- Paula Adamo Idoeta
- From BBC News Brasil in São Paulo
Stuart Russell, a professor at the University of California, Berkeley, has devoted decades to the study of Artificial Intelligence (AI), but he is also one of its best-known critics – at least of the model he still sees as the field’s “standard”.
But – unlike Hollywood movie plots on the subject – his worry is not that these technologies will become conscious and turn against us.
Russell’s main concern is with the way this intelligence is programmed by its human developers: the machines are tasked with optimizing their objectives as far as possible, basically at any cost.
And so they become “blind” and indifferent to the problems (or, ultimately, the destruction) they can cause humans.
To explain this to BBC News Brasil, Russell uses the metaphor of a genie in a lamp fulfilling its master’s wishes: “you ask the genie to make you the richest person in the world, and so it happens – but only because the genie made everyone else disappear,” he says.
“(In AI) we build machines with what I call the standard model: they are given goals that they have to achieve or optimize – that is, for which they find the best possible solution – and then they carry out that action.”
Even if this action is, in practice, harmful to humans, he argues.
“If we build Artificial Intelligence so as to optimize a fixed goal given by us, they (machines) will be almost like psychopaths – pursuing that goal and being completely oblivious to everything else, even if we ask them to stop.”
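Russell’s “standard model” can be illustrated with a deliberately simplified sketch (this is our illustration, not Russell’s code; the action space and numbers are invented): an agent that maximizes a fixed objective simply cannot see any cost the objective does not mention.

```python
# Toy illustration of the "standard model": the agent picks whatever
# maximizes its fixed objective and is blind to everything else.

def standard_model_agent(actions, objective):
    """Return the action that maximizes the given objective -- nothing else."""
    return max(actions, key=objective)

# Hypothetical actions: each yields some engagement (clicks) and has
# a side effect on user well-being that the objective never sees.
actions = [
    {"name": "balanced feed",   "clicks": 40, "well_being": +5},
    {"name": "outrage content", "clicks": 90, "well_being": -20},
]

# The objective counts only clicks, so well-being never enters the decision.
chosen = standard_model_agent(actions, objective=lambda a: a["clicks"])
print(chosen["name"])  # → outrage content
```

The harm to well-being is not weighed against the clicks; it simply does not exist for the agent, which is the “blindness” Russell describes.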
Social networks are a prime example, he says: the main task of their algorithms is to maximize the user’s engagement – collecting as much information as possible about that user and serving them content that suits their preferences, so that they stay connected longer.
Even if this comes at the expense of the user’s well-being or of society at large, the researcher continues.
“Social networks create addiction, depression, social dysfunction, maybe extremism, polarization of society, maybe they contribute to spreading disinformation. And it’s clear that their algorithms are designed to optimize one goal: that people click, that they spend more time engaged with the content,” Russell points out.
“And by optimizing these quantities, they could be causing enormous problems for society.”
However, Russell continues, these algorithms do not undergo enough scrutiny to be verified or “fixed” – so they continue to work to optimize their objective, regardless of collateral damage.
“(Social networks) are not only optimizing the wrong thing, they are also manipulating people, because manipulating them increases their engagement. If I can make you more predictable, for example by turning you into an extreme eco-terrorist, I can send you eco-terrorist content and make sure you click, and so maximize my clicks.”
These criticisms were reinforced this Tuesday (October 5) by former Facebook employee turned whistleblower Frances Haugen, who testified at a US Congressional hearing and said that the company’s social networking sites and apps “harm children, provoke divisions and undermine democracy.” Facebook responded that Haugen does not have enough knowledge to make such claims.
Russell, in turn, will detail his theories to an audience of Brazilian researchers on October 13, in the keynote lecture of the meeting of the Brazilian Academy of Sciences, held virtually.
The researcher, author of Human Compatible: Artificial Intelligence and the Problem of Control (not yet published in Brazil), is considered a pioneer in the field he calls “human-compatible Artificial Intelligence”.
“We need a completely different kind of AI system,” he tells BBC News Brasil.
This type of AI, he goes on, would have to “know” that it has limitations, that it cannot accomplish its goals at any cost, and that, even if it is a machine, it can be wrong.
“This would make the intelligence behave in a completely different way, more cautious (…): it will ask permission before doing something when it is not sure it is what we want. And, in the most extreme case, it will allow itself to be turned off so as not to do something that would harm us. That is my main message.”
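The contrast with the standard model can be sketched in the same toy setting (again our construction, with invented actions and objectives, not Russell’s code): instead of one fixed objective, the agent holds several hypotheses about what the human actually wants, and when those hypotheses disagree it defers rather than acts.

```python
# Toy sketch of an uncertainty-aware agent: it acts only when all of its
# candidate objectives agree on the best action; otherwise it defers to
# the human and asks permission.

def cautious_agent(actions, candidate_objectives):
    """Act if every hypothesis about the goal picks the same action;
    otherwise ask permission instead of acting."""
    best = {max(actions, key=obj)["name"] for obj in candidate_objectives}
    if len(best) == 1:
        return f"act: {best.pop()}"
    return "ask permission"  # unsure what the human wants -> defer

actions = [
    {"name": "balanced feed",   "clicks": 40, "well_being": +5},
    {"name": "outrage content", "clicks": 90, "well_being": -20},
]

# Two hypotheses about what the user actually values:
objectives = [lambda a: a["clicks"], lambda a: a["well_being"]]
print(cautious_agent(actions, objectives))  # → ask permission
```

Because “maximize clicks” and “maximize well-being” disagree about the best action, the agent declines to act on its own – the cautious, permission-seeking behavior Russell describes.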
The theory defended by Russell is not a consensus: there are those who do not consider this current model of Artificial Intelligence to be threatening.
A famous example of both sides of this debate occurred a few years ago, in a public disagreement between tech entrepreneurs Mark Zuckerberg and Elon Musk.
A New York Times report recounts that at a dinner in 2014 the two businessmen argued: Musk said he “genuinely believed in the danger” of Artificial Intelligence becoming superior and subjugating humans.
Zuckerberg, however, opined that Musk was being an alarmist.
In an interview that same year, the creator of Facebook considered himself an “optimist” about Artificial Intelligence and claimed that critics such as Musk “were painting apocalyptic and irresponsible scenarios”.
“Whenever I hear people saying that AI is going to hurt people in the future, I think technology can often be used for good and for bad, and you need to be careful about how you build it and how it’s going to be used. But I find it questionable to argue for slowing down the AI process. I can’t understand that.”
Musk has argued that AI is “potentially more dangerous than nuclear warheads”.
A slow and invisible ‘nuclear disaster’
Stuart Russell shares Musk’s concern and also draws parallels to the dangers of the nuclear race.
“I think many (tech experts) find this argument (about the dangers of AI) threatening because it basically says, ‘the discipline we’ve been working in for several decades is potentially a big risk.’ Some people see this as being anti-Artificial Intelligence,” Russell argues.
“Mark Zuckerberg thinks Elon Musk’s comments are anti-AI, but that strikes me as ridiculous. It’s like saying that warning that a nuclear bomb might explode is an anti-physics argument. It’s not anti-physics, it’s a complement to physics, which created a technology so powerful that it could destroy the world. And we actually had (the nuclear accidents of) Chernobyl and Fukushima, and the industry was decimated because it didn’t pay enough attention to the risks. If you want to get the benefits of AI, you have to pay attention to the risks.”
The current lack of control over social media algorithms, Russell argues, can cause “huge problems for society” on a global scale as well, but, unlike a nuclear disaster, “slowly and almost invisibly”.
How, then, to reverse this course?
For Russell, a complete redesign of social media algorithms may be needed. But, first, it is necessary to know them in depth, he says.
‘Find out what causes polarization’
Russell points out that on Facebook, for example, not even the independent board charged with overseeing the social network has full access to the algorithm that curates the content seen by users.
“But there is a large group of researchers and a large project underway at the Global Partnership on AI (GPAI), working with a large social network that I can’t identify, to gain access to data and carry out experiments,” says Russell.
“The main thing is to experiment with control groups, see with people what’s causing social polarization and depression, and (see) whether changing the algorithm improves that.”
“I’m not telling people to stop using social media, or that it is inherently evil,” Russell continues. “(The problem) is the way the algorithms work: the use of likes, the promoting or demoting of content (based on preferences). The way the algorithm chooses what to put in your feed seems to be based on metrics that are harmful to people. So we need to make user benefit the main goal, and that will make things work better and people will be happy to use these systems.”
There will not be a single answer as to what is “beneficial”. Therefore, argues the researcher, the algorithms will have to adapt this concept to each user, individually – a task that, he admits, is far from easy. “In fact, this (social media area) would be one of the most difficult in which to put this new AI model into practice,” he says.
“I think they would really have to start the whole thing from scratch. It’s possible that we’ll come to understand the difference between acceptable and unacceptable manipulation. For example, in the educational system we manipulate children to make them knowledgeable, capable, successful and well-integrated citizens – and we think that’s acceptable. But if the same process turned children into terrorists, it would be unacceptable manipulation. How exactly do you differentiate between the two? It’s a very difficult question, one that I have difficulty answering.”