Google announced on Wednesday (the 11th) a series of new features for the systems that integrate its services, such as a new version of Android. In terms of artificial intelligence, however, the impression that remains is that the American giant has tapped the brakes: announcements in this area focused on projects to test for and remedy known problems.
For example, Google has given other developers access to its Monk Skin Tone Scale, a list of 10 skin tones created by professor Ellis Monk of Harvard University. The idea is that the scale will help train AI models so that these technologies reflect (and respect) human diversity, reducing the risk of discriminatory algorithms, a problem that has plagued this market for years.
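To illustrate how a 10-point skin-tone scale can be used in practice, here is a minimal sketch of auditing a classifier for performance disparities across tones. The scale values (1–10) follow the source; the dataset, labels, and function name are invented for illustration and are not part of any Google API.

```python
# Hypothetical sketch: using a 10-point skin-tone annotation (like the
# Monk Skin Tone Scale) to check a classifier for accuracy disparities.
# All sample data below is invented illustration data.
from collections import defaultdict

def accuracy_by_tone(samples):
    """samples: list of (monk_tone 1-10, true_label, predicted_label)."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for tone, truth, pred in samples:
        totals[tone] += 1
        hits[tone] += (truth == pred)
    # Per-tone accuracy; a large gap between tones signals possible bias.
    return {tone: hits[tone] / totals[tone] for tone in sorted(totals)}

samples = [
    (2, "face", "face"), (2, "face", "face"),
    (9, "face", "face"), (9, "face", "not_face"),
]
print(accuracy_by_tone(samples))  # -> {2: 1.0, 9: 0.5}
```

The point of publishing the scale is exactly this kind of breakdown: once every image carries a tone annotation, disparities stop being anecdotal and become measurable.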
Google also announced a new app called AI Test Kitchen, which will let people test the company's latest artificial intelligence language models, find bugs, and provide feedback before the models are made available to the public.
Zoubin Ghahramani, vice president of research for the company’s AI division, believes that the adoption of artificial intelligence will be slow and gradual because many problems still need to be solved, and Google wants to be more cautious.
Fixing Past (and Present) Mistakes
The new stance may be a response to criticism Google has faced from the academic community. Even the company’s own employees expressed dissatisfaction during the development of its new AI language models.
Some employees said they were fired for pointing out problems in the models the company presented, such as gender and racial biases.
One example is computer scientist Timnit Gebru, a former leader of Google's AI ethics research. Her dismissal reportedly came after she accused the company of racism and censorship. At the time, Google said there was “a lot of speculation and misunderstanding” about her firing.
The AI Test Kitchen app will be available for Android, but installation will be by invitation only. The application will test LaMDA 2, an AI model specialized in natural language that Google is developing.
It works simply: you speak in your own way, and it responds, trying to understand the nuances and subtleties of language that people use every day but that can be difficult for machines to interpret.
According to Google, the app will be an experimental space where the company can test products being developed with LaMDA 2, such as Search, gathering feedback from the community to improve what is delivered and, of course, to resolve any discriminatory behavior that may arise.
Invitations to download AI Test Kitchen will be restricted, probably because when large corporations launch artificial intelligence systems without proper vetting, the results can be disastrous, as has already been seen.
In 2015, Google’s image service labeled a photo of a Black couple with the caption “gorillas”. And you may remember the problem Microsoft had with Tay, its AI-powered conversational bot, after Twitter users “taught” it to reproduce racist and misogynistic speech.
Or Ask Delphi, an artificial intelligence created to answer ethical questions, which could be persuaded to condone genocide.
Google’s new app essentially invites the tester community to criticize its product, but in a more controlled way, managing the feedback it receives. This suggests the company expects some things to still go wrong and to need tweaking.
Use of AI in the future
The AI Test Kitchen app has three experience modes: “Imagine”, “Talk about”, and “List”. Each of them tests a different capability of the language model the company is developing:
Imagine: You can name a real or imaginary place, and LaMDA will try to describe it. The system should be able to describe anything in detail.
- For example, when typing “Imagine I am at the bottom of the sea”, the AI responds with “You are in the Mariana Trench, the deepest point of the ocean. The waves crash on the walls of your submarine. You are surrounded by total darkness.”
Talk about: The artificial intelligence will try to hold a conversation on any topic. The idea is to see whether the system stays on the subject being discussed. In Google’s example:
- The AI asks, “Have you ever wondered why dogs like to play fetch?”
- The user responds simply, “why?”.
- The system understands the context and explains that it has something to do with dogs’ sense of smell.
- If the user asks “why do they have a better sense of smell?”, the system understands (or should understand) that “they” refers to dogs, without the person having to repeat the keywords.
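The dialogue above hinges on the model remembering earlier turns so that “they” can be resolved to “dogs”. As a toy illustration of why chat systems carry the full conversation history, here is a deliberately naive pronoun resolver; the function, its entity list, and the dialogue are invented stand-ins, not how LaMDA actually works.

```python
# Toy sketch: without earlier turns, "they" in the final question has no
# referent. A real model learns this implicitly; this naive stand-in just
# scans the history for the last known entity mentioned.

def resolve_pronouns(history, utterance):
    """Replace 'they' with the last tracked noun found in the history."""
    known_nouns = ["dogs"]  # pretend entity tracking extracted this earlier
    last_entity = None
    for turn in history:
        for noun in known_nouns:
            if noun in turn:
                last_entity = noun
    return utterance.replace("they", last_entity) if last_entity else utterance

history = [
    "Have you ever wondered why dogs like to play fetch?",
    "why?",
    "It has to do with dogs' sense of smell.",
]
print(resolve_pronouns(history, "why do they have a better sense of smell?"))
# -> "why do dogs have a better sense of smell?"
```

Strip the history away and the question becomes unanswerable, which is exactly the failure mode the “Talk about” mode is meant to probe.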
List: The app tries to break any subject down into relevant topics.
- When the user says, “I want to plant vegetables”, the responses might include “What do you want to plant?” and a list of tasks and items to gather, such as “water and other care.”