How LaMDA works: Google’s artificial brain that an engineer ‘accused’ of having a consciousness of its own

A thinking and conscious machine. That’s how Google engineer Blake Lemoine defined LaMDA — Google’s artificial intelligence system.

Lemoine was removed from his duties by the company.


“Our team — which includes ethics and technology experts — has reviewed Blake’s concerns in line with our AI Principles and informed him that the evidence does not support his claims,” Brian Gabriel, a Google spokesperson, said in a statement.

But how does this machine work?

If we go by old science fiction movies, we might imagine LaMDA as a robot that takes human form, opens its eyes, gains consciousness and speaks. Or like HAL 9000, the supercomputer from the movie 2001: A Space Odyssey, which in a parody on The Simpsons (voiced by Pierce Brosnan in the English original) falls in love with Marge and wants to kill Homer.

But the reality is a little more complex. LaMDA is an artificial brain housed in the cloud. It is fed millions of texts, and it trains itself. And yet, in a way, it behaves like a parrot.

Sound complicated? Let’s take it step by step, to understand it better.

LaMDA is a huge neural network that trains itself — Photo: BBC/GETTY IMAGES

LaMDA (Language Model for Dialogue Applications) was developed by Google. Its basis is the Transformer, an architecture of deep artificial neural networks that Google created and open-sourced in 2017.

“This neural network trains itself with large amounts of text. But the learning has a goal, which is presented in the form of a game. It is given a complete sentence with one word missing, and the system has to guess that word,” explains Julio Gonzalo Arroyo, professor at UNED (Spain’s National University of Distance Education) and principal researcher in its natural language processing and information retrieval department.
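
To make the idea of this “game” concrete, here is a minimal sketch in Python. It is not Google’s code: the “model” is just a word-frequency table, an assumption made purely for illustration. It hides one word of a sentence, lets that toy model guess it and checks the guess against the answer.

```python
import random
from collections import Counter

# Toy corpus standing in for the millions of texts LaMDA is fed.
corpus = [
    "i started playing the guitar yesterday",
    "she plays the guitar every day",
    "playing the piano is hard",
]

# A trivial "model": it only counts how often each word appears.
word_counts = Counter(word for line in corpus for word in line.split())

def play_guessing_game(sentence: str) -> None:
    words = sentence.split()
    hidden_index = random.randrange(len(words))
    answer = words[hidden_index]
    words[hidden_index] = "___"  # hide one word, as in the training game

    # The "guess": the most common word in the corpus. A real model would
    # use the surrounding words; this only illustrates the guess-and-check loop.
    guess = word_counts.most_common(1)[0][0]

    print(" ".join(words))
    print(f"guess: {guess!r}, answer: {answer!r}, correct: {guess == answer}")
    # In a real system, a wrong guess is what drives the adjustment of
    # billions of parameters; here we simply report it.

play_guessing_game("she plays the guitar every day")
```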

The system plays this game against itself. It fills in words by trial and error and, when it gets one wrong, it does what a child does with an activity book: it checks the correct answer at the back, then keeps correcting and refining its parameters.

It also “identifies the meaning of each word and observes the other words around it,” according to Gonzalo Arroyo. In this way it becomes an expert at predicting patterns and words. The process is similar to the predictive text on a cell phone, but raised to the nth power, with a much larger memory.
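
The predictive-text comparison can also be sketched in a few lines. The toy bigram model below is an assumption for illustration only, not LaMDA’s architecture: it simply remembers which word most often follows each word in a tiny corpus and uses that to “predict” the next one.

```python
from collections import Counter, defaultdict

corpus = [
    "i started playing guitar",
    "i started playing piano",
    "he kept playing guitar",
]

# Count which word follows which (a bigram table): the crudest possible
# version of "observing the other words around it".
following = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word that most often followed `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("started"))  # "playing" follows "started" most often
print(predict_next("playing"))  # "guitar" follows "playing" most often
```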

High-quality, specific and interesting responses

But LaMDA also produces fluid and spontaneous responses that, according to Google, recreate the dynamism and recognize the nuances of human conversation. In short: responses that don’t look like they were written by a robot.

LaMDA has an extraordinary ability to intuit which words are most appropriate in each context — Photo: BBC/GETTY IMAGES

This fluidity is one of Google’s goals, according to its technology blog. The company says it gets there by ensuring that the answers have quality, are specific and show interest.

For the answers to have quality, they need to make sense. If I tell LaMDA, for example, “I started playing guitar,” it should respond with something related to what I said, not with something meaningless.

For the second objective (specificity) to be met, LaMDA should not respond with a generic “very good,” but with something more specific, such as: “Which guitar brand do you prefer, Gibson or Fender?”

And for the system to provide answers that demonstrate interest and insight, it must reach a higher level. For example: “The Fender Stratocaster is a good guitar, but Brian May’s Red Special is unique.”
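
Google does not publish how these criteria are scored, so the sketch below is purely hypothetical: each candidate reply to “I started playing guitar” is given hand-written flags for making sense, being specific and being interesting, and the highest-scoring reply wins. LaMDA’s real measures are learned by the model, not written as rules.

```python
# Hypothetical illustration only: each candidate reply is scored on the
# three goals described above (sense, specificity, interest) and the best wins.
candidates = {
    "That is nice.":                                  (1, 0, 0),  # sensible, not specific
    "Which guitar do you prefer, Gibson or Fender?":  (1, 1, 0),  # sensible and specific
    "The Fender Stratocaster is a good guitar, but "
    "Brian May's Red Special is unique.":             (1, 1, 1),  # also interesting
    "Bananas are yellow.":                            (0, 0, 0),  # makes no sense here
}

def score(flags: tuple) -> int:
    sensible, specific, interesting = flags
    return sensible + specific + interesting

best = max(candidates, key=lambda reply: score(candidates[reply]))
print(best)  # prints the reply about Brian May's Red Special
```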

The key to responding at this level of detail is self-training. “After reading billions of words, [the system] has an extraordinary ability to intuit which words are most appropriate in each context.”

For artificial intelligence experts, transformers such as LaMDA stand out because “they allow very efficient processing [of information or texts] and have produced a veritable revolution in the field of natural language processing.”

Another goal of LaMDA’s training is not to create “violent or inhumane content, not to promote slander or hate speech against groups of people, and not to contain profanity,” according to Google’s artificial intelligence (AI) blog.

Google also wants the answers to be based on facts and backed by known external sources.

“With LaMDA, we’re taking a careful and thoughtful approach to better address valid concerns about fairness and truthfulness,” said Brian Gabriel, a Google spokesperson.

He argues that the system has already gone through 11 separate reviews against the AI Principles, “in addition to rigorous research and testing based on fundamental measures of quality, safety, and the system’s ability to produce fact-based statements.”

But how can a system like LaMDA avoid being biased or producing hateful messages? “The secret is to select which data [which text sources] is fed into the system,” says Gonzalo Arroyo.

But this is not easy. “The way we communicate reflects our biases, and the machines learn them. It is difficult to eliminate them from the training data without losing representativeness,” he explains.
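
That selection can be pictured as a filtering step before training. The sketch below is a deliberately naive illustration, not a real curation pipeline: documents from sources outside an allow-list, or containing flagged terms, are simply dropped, which also hints at why filtering can cost representativeness.

```python
# Deliberately naive illustration of "selecting which text sources are fed
# into the system". Real data-curation pipelines are far more sophisticated.
ALLOWED_SOURCES = {"encyclopedia", "news"}
FLAGGED_TERMS = {"slur_example"}  # placeholder for a real blocklist

documents = [
    {"source": "encyclopedia", "text": "The guitar is a stringed instrument."},
    {"source": "forum",        "text": "Some unmoderated rant."},
    {"source": "news",         "text": "A sentence containing slur_example."},
]

def keep(doc: dict) -> bool:
    from_allowed_source = doc["source"] in ALLOWED_SOURCES
    is_clean = not any(term in doc["text"].lower() for term in FLAGGED_TERMS)
    return from_allowed_source and is_clean

training_data = [doc for doc in documents if keep(doc)]
print([doc["text"] for doc in training_data])
# Only the encyclopedia sentence survives: the flagged news sentence is lost
# too, hinting at the trade-off with representativeness mentioned above.
```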

In other words, biases can still slip through.

“If you feed it news about Queen Letizia [of Spain], all commenting on the clothes she wears, it is possible that when someone asks the system about her, it will repeat that sexist pattern and talk about clothes rather than anything else,” the professor points out.

In 1966, a system called ELIZA was designed that applied very simple patterns to simulate the dialogue of a psychotherapist.

“The system encouraged the patient to keep talking, whatever the topic of conversation, and triggered patterns such as ‘if the word family is mentioned, ask how the relationship with the mother is,’” says Gonzalo.
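
ELIZA’s trick is simple enough to reproduce in a few lines. The sketch below is loosely inspired by the 1966 program rather than its actual source code: it matches keywords such as “family” and fires a canned follow-up question, and otherwise just asks the patient to keep talking.

```python
import re

# A handful of keyword -> canned-question rules, in the spirit of ELIZA (1966).
RULES = [
    (re.compile(r"\bfamily\b", re.IGNORECASE), "How is your relationship with your mother?"),
    (re.compile(r"\bmother\b", re.IGNORECASE), "Tell me more about your mother."),
    (re.compile(r"\bsad\b", re.IGNORECASE),    "Why do you think you feel sad?"),
]

def eliza_reply(patient_says: str) -> str:
    for pattern, canned_question in RULES:
        if pattern.search(patient_says):
            return canned_question
    # Whatever the topic, encourage the patient to keep talking.
    return "Please, tell me more."

print(eliza_reply("I had an argument with my family yesterday."))
# -> "How is your relationship with your mother?"
```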

Some people came to believe that ELIZA really was a therapist, and even claimed that it had helped them. “We human beings are relatively easy to deceive,” says Gonzalo Arroyo.

For him, Lemoine’s claim that LaMDA has gained self-awareness “is an exaggeration.” According to the professor, statements like Lemoine’s do not help to maintain a healthy debate about artificial intelligence.

“Paying attention to this kind of nonsense is not helpful. We run the risk of it becoming an obsession, with people thinking we are living in The Matrix and the machines are ready to finish us off. That is a remote, far-fetched scenario. I don’t think it helps us have a sensible conversation about the benefits of artificial intelligence,” says Gonzalo Arroyo.

However fluid, high-quality and specific the conversation may be, “it is nothing more than a gigantic formula that adjusts its parameters to better predict the next word. It has no idea what it is talking about.”

Google’s answer is similar. “These systems imitate the types of exchanges found in millions of sentences and can talk about any fantastical topic. If you ask what it’s like to be an ice cream dinosaur, they can generate text about melting, roaring and so on,” explains Google’s Gabriel.

American researchers Emily Bender and Timnit Gebru compared these language-generation systems to “stochastic parrots,” which string words together at random.

That is why, as Spanish researchers Ariel Guersenvaig and Ramón Sangüesa put it, transformers like LaMDA understand what they write about as much as a parrot understands what it sings.

This text was originally published on BBC News Brasil.
