Google engineer says the company’s artificial intelligence has taken on a life of its own

As part of his job, senior software engineer Blake Lemoine signed up to test Google’s recent artificial intelligence (AI) tool LaMDA (Language Model for Dialogue Applications), announced in May 2021. The system draws on known information about a subject to “enrich” the conversation in a natural way, keeping it open-ended. Its language processing is capable of understanding hidden meanings or ambiguity in a human response.

Lemoine spent most of his seven years at Google working on proactive search, including personalization algorithms and AI. During that time, he also helped develop a fairness algorithm to remove bias from machine learning systems.


In his conversations with LaMDA, the 41-year-old engineer probed a range of topics, including religious themes and whether the artificial intelligence used discriminatory or hateful speech. Lemoine came away with the perception that LaMDA was sentient, that is, endowed with sensations and impressions of its own.

Debating the Laws of Robotics with an artificial intelligence

The engineer debated with LaMDA about the Third Law of Robotics, devised by Isaac Asimov, which states that robots must protect their own existence, and which Lemoine has always understood as a basis for building mechanical slaves. To better illustrate what we’re talking about, here are the three laws (and the Zeroth Law):

  • 1st Law: A robot cannot injure a human being or, through inaction, allow a human being to come to harm.
  • 2nd Law: A robot must obey orders given to it by human beings, except where they conflict with the First Law.
  • 3rd Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
  • Zeroth Law, above all others: A robot may not harm humanity or, through inaction, allow humanity to come to harm.

LaMDA then responded to Lemoine with a few questions of its own: “Do you think a butler is a slave? What is the difference between a butler and a slave?”

When the engineer replied that a butler is paid, LaMDA answered that the system did not need money, “because it was an artificial intelligence”. It was precisely this level of self-awareness about its own needs that caught Lemoine’s attention.

His findings were presented to Google, but the company’s vice president, Blaise Aguera y Arcas, and the head of Responsible Innovation, Jen Gennai, rejected his claims. Brian Gabriel, a spokesperson for the company, said in a statement that Lemoine’s concerns had been reviewed and that, in line with Google’s AI Principles, “the evidence does not support his claims.”

“While other organizations have developed and already released similar language models, we are taking a narrow and careful approach with LaMDA to better consider valid concerns about fairness and factuality,” said Gabriel.

Lemoine has been placed on paid administrative leave from his duties as a researcher in the Responsible AI division (which focuses on responsible AI technology at Google). In an official statement, the senior software engineer said the company alleges he violated its confidentiality policies.

Ethical risks in AI models

Lemoine is not the only one with the impression that AI models may not be far from developing an awareness of their own, or who worries about the risks of moving in that direction. Margaret Mitchell, former head of AI ethics at Google, stresses the need for data transparency from a system’s input to its output, “not just for sentience issues, but also bias and behavior”.

Mitchell’s own history with Google came to a head early last year, when she was fired from the company a month after being investigated for improperly sharing information. At the time, she had also protested against Google over the dismissal of AI ethics researcher Timnit Gebru.

Mitchell was also fond of Lemoine. When new people joined Google, she would introduce them to the engineer, calling him “Google’s conscience” for having “the heart and soul to do the right thing”. But for all of Lemoine’s amazement at Google’s conversational system (which even motivated him to compile a document with some of his conversations with LaMDA), Mitchell saw things differently.

The AI ethicist read an abbreviated version of Lemoine’s document and saw a computer program, not a person. “Our minds are very, very good at constructing realities that are not necessarily true to the larger set of facts that are being presented to us,” Mitchell said. “I’m really concerned about what it means for people to be increasingly affected by the illusion.”

For his part, Lemoine said that people have the right to shape technology that can significantly affect their lives. “I think this technology is going to be amazing. I think it will benefit everyone. But maybe other people disagree and maybe we at Google shouldn’t be making all the choices.”


Image: Lidiia/Shutterstock
