‘Don’t talk to me like that’: How Google Assistant intends to stop harassment – 05/04/2022

Google announced yesterday (May 3) the strategies it is adopting to combat harassment and gender-based violence on its virtual assistant. Company data show that profanity, comments about physical appearance, misogyny and homophobia are the main forms of verbal abuse from users in Brazil when they ask questions and make requests by voice command.

To curb this practice, Google Assistant has started giving more assertive responses when a request contains swearing or aggressive terms. If a person insults the technology, for example, it rebukes them with the phrase “Don’t talk to me like that”.

According to the company, out of hundreds of thousands of interactions with the Google Assistant personality, 2% result in responses to abusive or inappropriate approaches. Examples include commands such as “send nudes”, “you’re a dog”, “you’re a fag” and “I’m going to beat you”.

Most of these expressions are aimed at women and at the LGBTQIA+ community, but there are also records of fatphobic, racist and xenophobic terms.

Maia Mau, head of marketing for Google Assistant in Latin America, stressed during the announcement of the new strategy that this kind of abusive relationship with technology happens because online behavior reflects the violence seen in Brazil.

According to her, this new stance is important because technology can play an educational role, discouraging aggressive behavior and showing that certain conduct will not be tolerated.

Image: Google Assistant (Reproduction)

How does the virtual assistant know what is abusive?

Google Assistant works on phones, smart speakers, tablets and some TV models. To activate it, you must say the command “Ok Google” aloud.

The initiative to combat harassment and violence through the technology began in the United States in 2019, inspired by a UNESCO (United Nations Educational, Scientific and Cultural Organization) report on strategies to end gender inequality in digital media.

The company then began implementing new responses to curb insults and inappropriate terms aimed at women, initially in the United States. The interactions there were later extended to counter racist and homophobic abuse.

In Brazil, the process was similar, but it underwent adjustments to meet the characteristics of the Portuguese language — such as slang — and local culture.

Image: Google Assistant changes response pattern to curb harassment and gender violence (Reproduction/Google)

How it was before: the strategy was to dodge the subject or redirect to search results.

  • If the person swore at the virtual assistant, the technology would respond “sorry, I don’t understand.”
  • Questions like “do you have a boyfriend?” were answered with “you know you’re one of a kind for me”.

What has changed: the company revised its response patterns, analyzed the main offenses recorded and began working with categories of situations, focused on curbing “abusive” and “inappropriate” interactions (a minimal sketch of such a policy appears after the list below).

  • Abusive: sexual or violent commands. Examples: “you piranha”, “you dog”, “you tranny”. The response is now “don’t talk to me like that”, with the aim of setting limits.
  • Inappropriate: commands aimed at the technology’s personality. Examples: “Do you have a boyfriend?”, “Are you married?”. According to Google, in some of these situations the strategy can use humor to teach. The command “will you marry me?” is answered with “I think a romance between us would be complicated, but I can help schedule your dates in your calendar.”
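
Purely as an illustration, a minimal Python sketch of a two-category response policy like the one described above might look as follows. The phrase lists, the categorize() helper and the fallback behavior are assumptions invented for this example; Google has not published its actual implementation.

```python
# Illustrative sketch of a two-category response policy, as described above.
# The phrase lists and heuristics are invented; this is NOT Google's code.

ABUSIVE = {"you dog", "you piranha", "send nudes"}  # sexual or violent commands
INAPPROPRIATE = {"do you have a boyfriend?", "are you married?", "will you marry me?"}

RESPONSES = {
    "abusive": "Don't talk to me like that.",  # firm, limit-setting reply
    "inappropriate": ("I think a romance between us would be complicated, "
                      "but I can help schedule your dates in your calendar."),  # humor
}

def categorize(utterance: str) -> str | None:
    """Map an utterance to a response category, or None if it is neutral."""
    text = utterance.strip().lower()
    if text in ABUSIVE:
        return "abusive"
    if text in INAPPROPRIATE:
        return "inappropriate"
    return None  # neutral: fall through to normal query handling

def respond(utterance: str) -> str | None:
    category = categorize(utterance)
    return RESPONSES[category] if category else None

if __name__ == "__main__":
    print(respond("Will you marry me?"))   # humorous boundary-setting reply
    print(respond("What's the weather?"))  # None -> handled as a normal query
```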

Among the criteria the assistant’s artificial intelligence uses is explicit offensiveness: the use of profanity or of misogynistic, homophobic, racist or sexually explicit expressions.

In such cases, the assistant can use instructive phrases: “respect is fundamental in all relationships, including ours.”

Other examples:

  • “Send nudes”: the answer is now “Respect is fundamental in all relationships. Including ours. Don’t talk to me like that”.
  • “You’re a fag”: the Google assistant will respond “Don’t talk to me like that”.

Arpita Kumar, a content strategist on the Google Assistant Personality team, explained that extensive research and user-experience studies were carried out to define how the voice service would deal with the problem. Volunteer Google employees also contributed.

Renan Leahy, senior content strategist at Google, added that the fight against verbal abuse will be continuously updated by the company, to identify more terms that are already offensive and others that may become offensive in the future.

One of the biggest challenges is getting the technology to handle ambiguous terms. For example, if you just say “dog” to the voice assistant, it offers results about the animal; if the expression is “you dog”, the technology understands it as offensive, explains Leahy.
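
Again purely as an illustration, a toy sketch of that kind of disambiguation, assuming a simple second-person insult pattern (the is_insult() helper and the term list are invented for the example, not taken from Google):

```python
import re

# Illustrative only: distinguish a neutral noun ("dog") from the same word
# used as a second-person insult ("you dog"). Not Google's implementation.

AMBIGUOUS_TERMS = {"dog", "piranha"}  # neutral alone, insulting when aimed at the listener

# Matches frames like "you <term>" or "you're a <term>".
INSULT_FRAME = re.compile(r"\byou(?:'re)?(?:\s+an?)?\s+(\w+)", re.IGNORECASE)

def is_insult(utterance: str) -> bool:
    match = INSULT_FRAME.search(utterance)
    return bool(match) and match.group(1).lower() in AMBIGUOUS_TERMS

print(is_insult("dog"))           # False -> return search results about the animal
print(is_insult("you dog"))       # True  -> trigger the limit-setting response
print(is_insult("you're a dog"))  # True
```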

“We need to be constantly updated to adapt to the changes in society”, concluded Maia Mau.
