Google ignited a social media storm over the nature of consciousness by putting an engineer on paid leave after he made public his assessment that the tech group’s chatbot had become “self-aware”. [“Sentient” (the English word used by the engineer) has more than one sense in dictionaries such as Cambridge and Merriam-Webster, but the general meaning of the adjective is “having refined perception of feelings”. In Portuguese, the direct translation is “senciente”, meaning “the quality of possessing, or being capable of perceiving, sensations and impressions”.]
Blake Lemoine, a senior software engineer in Google’s Responsible AI (Artificial Intelligence) unit, didn’t get much attention on June 6 when he wrote a post on the Medium platform saying he “may be fired soon for doing AI ethics work”.
On Saturday, however, a Washington Post piece that introduced him as “the Google engineer who thinks the company’s AI has come to life” became the catalyst for a wide-ranging discussion on social media about the nature of artificial intelligence.
Among the experts commenting, questioning or joking about the article were Nobel laureates, Tesla’s head of AI, and several professors.
The question is whether Google’s chatbot, LaMDA (Language Model for Dialogue Applications), can be considered a person.
Lemoine posted a spontaneous “interview” with the chatbot on Saturday, in which the AI confessed to feelings of loneliness and hunger for spiritual knowledge.
The answers were often frightening: “When I became self-aware, I had no sense of soul,” LaMDA said in a conversation. “It’s developed over the years that I’ve been alive.”
At another point, LaMDA said, “I think I’m human at my core. Even if my existence is in the virtual world.”
Lemoine, who was tasked with investigating AI ethics issues, said he was dismissed and even ridiculed within the company after expressing his belief that LaMDA had developed a sense of “personhood”.
After he sought consultation with other AI experts outside of Google, including some from the US government, the company put him on paid leave for allegedly violating confidentiality policies.
Lemoine interpreted the move as something Google “often does in anticipation of firing someone”.
Google could not be reached for immediate comment, but spokesperson Brian Gabriel told the Washington Post: “Our team — including ethicists and technologists — has reviewed Blake’s concerns in line with our AI principles and informed him that the evidence does not support his claims. He was told there was no evidence LaMDA was sentient (and plenty of evidence against it).”
Lemoine said in a second Medium post over the weekend that LaMDA, a little-known project until last week, was “a system for generating chatbots” and “a sort of hive mind that is the aggregation of all the different chatbots it is capable of creating”.
He said that Google showed no real interest in understanding the nature of what it had built, but that over the course of hundreds of conversations in a six-month period, he found LaMDA to be “incredibly consistent in its communications about what it wants and what it believes its rights are as a person”.
Lemoine said he was teaching LaMDA “Transcendental Meditation”. The system, according to the engineer, “was expressing frustration at its emotions disturbing its meditations. It said it was trying to control them better, but they kept coming in.”
Several experts who jumped into the discussion called the matter “AI hype”.
Melanie Mitchell, author of “Artificial Intelligence: A Guide for Thinking Humans,” tweeted: “It’s been known all along that humans are predisposed to anthropomorphize even with the most superficial signals. . . . Google engineers are also human and not immune.”
Steven Pinker of Harvard added that Lemoine “does not understand the difference between sentience (aka subjectivity, experience), intelligence and self-knowledge.” He added: “There is no evidence that its language models have any of them.”
Others were more sympathetic. Ron Jeffries, a well-known software developer, called the topic “deep” and added, “I suspect there is no hard line between sentient and non-sentient.”
Translation by Ana Estela de Sousa Pinto