Google engineer fired after claiming company’s AI is self-aware

Google fires another AI team member responsible for pointing out errors in the company's systems

In another clash with experts in its artificial intelligence (AI) operation, Google has moved against an employee after he warned that the company's natural-language software could be self-aware. The news was reported by the American newspaper The Washington Post on Saturday the 11th.

According to the report, Blake Lemoine, an engineer in the company's Responsible AI division, was placed on administrative leave after raising questions about LaMDA, Google's natural language processing system. In April of this year, Lemoine circulated a document inside the company asking, “Is LaMDA self-aware?”, describing a long interview he conducted with the software to test it for reactions such as emotions, potentially discriminatory responses and self-understanding.

Google, however, said Lemoine violated its confidentiality policy by sharing information about the LaMDA experiment with lawyers and a member of the House Judiciary Committee, which is investigating the company in an antitrust case.

After being placed on leave, on June 6, the engineer published the transcript of the interview he had conducted with LaMDA. In his view, the software can be considered self-aware because the system “has feelings, emotions and subjective experience”.

Lemoine analyzed the responses the AI produced that he found convincing, drawing on notions of robot rights and ethics. At one point in the conversation, the engineer asked LaMDA to describe some of its feelings and fears. The AI responded that it was afraid of being shut down, which, it said, would be the equivalent of death.

The search giant, however, disputed the engineer's claims, saying there is no evidence that LaMDA is self-aware. “Our team – including ethicists and technologists – has reviewed Blake’s concerns in line with our AI Principles and informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was self-aware (and there is a lot of evidence against it),” Google spokesperson Brian Gabriel told The Washington Post.

“Of course, some in the wider AI community are considering the long-term possibility of self-aware AI, but it makes no sense to anthropomorphize today’s conversational models, which are not self-aware,” said Gabriel. “These systems imitate the types of exchanges found in millions of sentences and can riff on any fantastical topic.”


It is not the first time Google has drawn the spotlight over upheaval in its AI operation. In 2020, Timnit Gebru, a former ethics researcher in Google's AI division, was fired after raising concerns about racial bias in the company's AI tools.

In an email sent to colleagues, Timnit said she was frustrated with the lack of gender diversity in the unit and questioned whether her bosses reviewed her work more rigorously than that of people from different backgrounds. Timnit is a co-founder of the non-profit organization Black in AI, which seeks to increase the representation of people of color in artificial intelligence, and co-authored a leading paper on bias in facial analysis technology.

In a series of tweets on the subject, Timnit said that a supervisor had accepted her resignation after the emails were sent – a resignation that, according to her, she had never offered.

In March of this year, Satrajit Chatterjee, an AI researcher at Google, was fired after leading a team of scientists that questioned a published paper about abilities attributed to AI in the design of computer chips.

The paper, published in the scientific journal Nature, claims that computers are capable of designing some components of a computer chip faster and better than humans. Google, however, told the team that it would not publish their article disputing some of the claims made in Nature.


Introduced in 2021, LaMDA is a natural language processing system similar to BERT and GPT-3, capable of generating responses closer to those of a human conversation. Launched as a product still in development, LaMDA reappeared at Google I/O 2022 in an updated version, which the company promised would be the “most conversational intelligence of all time”. According to the company, the system has been updated to correct responses considered offensive or incorrect.

With a focus on dialogue, the technology is able to build more complete conversations than a voice assistant, incorporating information from different possible contexts. That is why, for example, LaMDA can produce an answer to a question like “What would a world made of ice cream look like?” – from information about the concepts “world” and “ice cream”, the system is able to develop a line of reasoning on the question.

WITH THE NEW YORK TIMES

About Raju Singh

