Google has changed how its voice assistant responds to verbal abuse from users. According to the company, the goal is to discourage behavior that can reinforce gender prejudice in society.
The initiative launched first in the United States and was rolled out to all versions of Google Assistant in Brazil last Tuesday (3). With the change, the platform now gives more assertive responses when it detects insults or inappropriate language.
The update introduces new reactions for two types of situation:
- Explicit offenses: if the user addresses the assistant with profanity or with misogynistic, homophobic, racist, or sexually explicit language, Google's voice will respond with phrases such as "Respect is fundamental in all relationships, including ours" or "Don't talk to me like that"; previously, the tool would respond with something like "Sorry, I don't understand"
- Inappropriate messages without explicit offenses: if someone asks whether the assistant has a girlfriend, for example, it will express discomfort with the question; in the previous version, it could reply with something like "You know you're the one for me"
Launched in 2016, Google Assistant is present on nearly 1 billion devices and has more than 500 million users worldwide.
Brazil has the third-largest user base for the tool and, according to Google, records hundreds of thousands of offensive messages per month.
In Brazil, about 2% of interactions with the tool's personality involve abusive comments. Of these, about 15% contain misogynistic language or sexual harassment.
To roll out the changes in Brazil, Google also took offenses specific to the Brazilian context into account, with input from employees who belong to representative groups.
Google Assistant offers more than one voice, one of which sounds female. Among the offenses directed at this version, 22% involve physical appearance. For the male-sounding voice, abuse involving appearance accounts for 13% of the total.
For the "female" voice, another 51% of the offenses involve profanity, 12% include misogynistic comments, and 11% involve marriage proposals.
For the "male" voice, 55% of the offenses involve profanity, 11% marriage proposals, and 9% homophobia.
Reactions to new responses
Also according to Google, the new approach led to a 6% increase in positive follow-ups, that is, users apologizing or asking "why?" after a firmer response to the offense.
"The positive follow-ups were also a big sign that people wanted to better understand why Assistant was pushing certain types of conversations away. These conversation threads became gateways to exploring topics such as consent," said Arpita Kumar, content strategist on the Google Assistant Personality team.