Blake Lemoine, the Google engineer who believed a company chatbot had become conscious, was talking nonsense. We clearly do not yet have the technology to manufacture self-aware machines.
The system Lemoine interacted with before reaching his hasty conclusion is an artificial intelligence trained on trillions of pages of human conversation, from which formidable sample it attempts to produce convincing dialogue. That is imitation, not reflective thought. It is enough to pass the Turing test, which has become somewhat obsolete, but not to qualify as an object of moral consideration.
Lemoine, however, raises a very interesting question: will we ever see artificial intelligences with consciousness? Anil Seth's book, which I discussed a short while ago, has luminous passages on this question, while remaining agnostic about the answer. For most philosophers of mind, who embrace functionalism, consciousness is information processing, no matter the medium in which it takes place.
It doesn't matter whether the substrate is living cells or integrated circuits. Along the same lines, but more naive, are the singularity enthusiasts, for whom it is enough to cross a certain threshold of intelligence to attain consciousness.
Seth reminds us that things can be more complicated. Everything depends, of course, on the model of consciousness we embrace. One model popular among neuroscientists treats consciousness as a self-monitoring system: the need to stay alive and maintain homeostasis led us to develop feelings and emotions, and to access them continually in order to know how we are doing and to anticipate the challenges the world imposes on us. On this model, it is difficult, though not impossible, to separate consciousness from the visceral materiality of life.
The moral of the story, for cognitive science and for life, is that intelligence is not synonymous with consciousness.