WhatsApp has a thousand moderators who read messages after user reports, says ProPublica

Workers hired by Facebook to handle user reports on WhatsApp can view at least five messages from a conversation, according to a ProPublica report published on Tuesday (7) that explains how the company's account review system works.

Although it is publicly known that WhatsApp has a human operation to review user reports, this model is not as openly disclosed as those of Facebook and Instagram, which belong to the same group. According to the report, the work is done in much the same way as on the two other platforms.

Outsourced teams work from offices in Austin, Texas (USA), Dublin (Ireland) and Singapore, where they review content flagged by users. About a thousand workers are responsible for reviewing reported texts, videos and images.

It is up to these workers to judge what appears on their screens, which they typically do in less than a minute, according to ProPublica.

Since WhatsApp is an end-to-end encrypted platform, on which no third party (not even WhatsApp itself) can read a conversation between two people, the question is how moderators receive the reported content.

Everything indicates that clicking the "report" button automatically creates an encrypted channel between the user's chat and Facebook, which can give the impression that encryption, one of WhatsApp's greatest strategic assets, is being broken.

Experts, however, say that when someone voluntarily reports content and sends it, encrypted, to the company, they do so expecting the request to be acted on, and that this is not a breach of encryption.

In a note, WhatsApp says it “strongly disagrees with the idea that accepting reports that users choose to send to the app is incompatible with end-to-end encryption.”
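What the report and the experts describe can be pictured with a minimal Python sketch of a hypothetical client-side report flow: the reporting device already holds the decrypted conversation locally, so it can bundle the most recent messages and send them to the provider over a separate channel, without touching the end-to-end encrypted channel between the two chat participants. Every name here, and the five-message window, is an assumption for illustration, not WhatsApp's actual implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Message:
    sender: str
    text: str  # plaintext: the device decrypted it on receipt

def build_report(local_history: List[Message], reported_user: str,
                 window: int = 5) -> dict:
    """Hypothetical client-side report: bundle the last `window`
    messages the reporting device already has in plaintext.
    Nothing here reads or weakens the E2E channel itself."""
    recent = local_history[-window:]
    return {
        "reported_user": reported_user,
        "messages": [{"sender": m.sender, "text": m.text} for m in recent],
    }

# The client would then encrypt this bundle to the provider (for
# example under a provider public key) and transmit it separately.
history = [Message("alice", f"msg {i}") for i in range(10)]
print(build_report(history, reported_user="alice")["messages"])
```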

The presence of these review teams makes the app better at combating spam and illegal content, but it also shows that its transparency strategy differs from that of Facebook and Instagram, which publish reports on what was flagged as inappropriate and how the company responded to it.

Not every behavior that violates the terms of use, such as harassment, bullying or hate speech, can be detected by artificial intelligence systems alone, so social networks hire human teams to filter what is reported.

According to ProPublica, moderators told the publication that the company's artificial intelligence program forwards an excessive amount of harmless content, "like children in bathtubs." When such a message arrives, they can see the user's last five messages.

In its terms of use, WhatsApp says that when an account is reported, the company "receives the most recent messages" from the reported group or user, as well as "information about your recent interactions with the reported user".

According to ProPublica, moderators can also view metadata such as the phone number, profile photo, linked Facebook and Instagram accounts, IP address and mobile device ID.
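For concreteness, the information moderators reportedly see could be modeled as a record like the one below. The field names are hypothetical, chosen only to mirror the items ProPublica lists (recent messages, phone number, profile photo, linked accounts, IP address, device ID); this is not a real WhatsApp data structure.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ModerationTicket:
    """Hypothetical shape of a moderation ticket, mirroring the
    metadata ProPublica says reviewers can see."""
    recent_messages: List[str]          # last messages from the reported chat
    phone_number: str
    profile_photo_url: Optional[str]
    facebook_account: Optional[str]     # linked account, if any
    instagram_account: Optional[str]    # linked account, if any
    ip_address: Optional[str]
    device_id: Optional[str]

ticket = ModerationTicket(
    recent_messages=[f"msg {i}" for i in range(1, 6)],
    phone_number="+1 555 0100",
    profile_photo_url=None,
    facebook_account=None,
    instagram_account=None,
    ip_address="203.0.113.7",
    device_id="device-abc123",
)
print(len(ticket.recent_messages))  # 5
```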

“WhatsApp provides a way for people to report spam or abuse on the platform, which includes sharing the latest messages in a conversation. This functionality is important to prevent the worst abuses on the internet,” the company said in a statement.

WhatsApp says it works with partners who help review and respond to user questions and complaints, comply with legal obligations and act on reported abuse.

In the statement, the company also cites a study by the US Center for Democracy and Technology that points to a "variety of cryptographic schemes proposed to enable user reporting (by users to service providers) of end-to-end encrypted messages".

Such solutions, the study says, are designed so that the message “can only be decrypted and verified by the service provider and none other than the original sender and recipients.”
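The core property the study describes, a report that only the service provider can decrypt, can be sketched with an off-the-shelf public-key construction such as libsodium's sealed boxes, used here through PyNaCl. This is a generic illustration of the idea only; it does not capture the verification ("franking") half of such schemes, and it is not the construction any provider is known to use.

```python
# pip install pynacl
from nacl.public import PrivateKey, SealedBox

# The provider holds a private key; its public key ships with the client.
provider_private = PrivateKey.generate()       # stays with the provider
provider_public = provider_private.public_key  # embedded in the app

# Reporting client: encrypt the report so only the provider can read it.
report = b"last five messages of the reported conversation..."
ciphertext = SealedBox(provider_public).encrypt(report)

# Neither the original sender nor the recipients can open this blob;
# only the provider, using its private key, can decrypt it.
assert SealedBox(provider_private).decrypt(ciphertext) == report
```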