Even with the warning that “messages and calls are protected with end-to-end encryption” displayed in all chats, WhatsApp can read your messages in some specific situations.
According to a report by the US nonprofit newsroom ProPublica, Facebook has hired, through an outsourced company, more than a thousand “content moderators” for WhatsApp, just as it does for its other platforms, such as Facebook and Instagram. Their job is to review messages reported by users.
This work does not break the end-to-end encryption of messages, however. For conversations to be read by the moderators, the user must choose to report an account to WhatsApp and, in doing so, agree to send a copy of recent messages for the moderators to evaluate the case.
How the “moderators” analyze messages
When you report a contact on WhatsApp for breaking the application’s rules, the last four messages in the conversation with that person are sent in unencrypted form to WhatsApp engineers and “moderators”. Automated systems then place this data in what is called a “reactive queue” for analysis.
It is important to distinguish this process from what happens on Instagram or Facebook: on those platforms, moderators can delete individual posts. On WhatsApp, they cannot. That is why Facebook does not call the employees doing this analysis “content moderators”, as it does elsewhere.
In parallel, artificial intelligence scans unencrypted data collected by WhatsApp about its users, forming “proactive queues” and comparing this information against activities and patterns considered suspicious, such as sending spam.
This unencrypted data includes items such as name and profile picture, phone battery level, language, time zone, IP address and internet signal strength.
Employees then have three options when they receive information from either queue: do nothing, place the user under surveillance for further scrutiny, or ban the reported account.
Still, their work requires subjective and sensitive judgments. There are several categories under which activities can be deemed violations of WhatsApp’s terms. For example: “spam report”, “civic bad actor” (for messages considered hate speech or misinformation), “credible threat of global terrorism”, “exploitative imagery of children” and “child pornography”.
Professionals interviewed by ProPublica say the guidelines for identifying patterns of violations can be quite explicit. To determine whether a nude person in a photo is a teenager or a child, for example, they use medical reference images for comparisons involving the development of the hips and even the pubic hair of the person portrayed.
Also, although the “moderators” speak several languages, some messages arrive in languages they do not know. They then have to resort to Facebook’s translation tool, which is not always accurate: it has, for example, mislabeled messages in Arabic as Spanish.
Information shared with authorities
Beyond these employees, authorities may also gain access to WhatsApp metadata in some cases. Metadata is the information on the “envelope” of a message, not its content: timestamps of sent messages and phone numbers, for example.
This may seem harmless at first glance, but metadata has already been used as evidence in cases that ended in prison. That is what happened to Natalie Edwards, an employee of the US Treasury Department, who leaked information to the press as a whistleblower.
By analyzing metadata, the FBI (the US federal investigative agency) concluded that Edwards exchanged about 70 messages with the reporter who published the documents over a period of approximately 20 minutes. Edwards was sentenced to six months in prison. She and the reporter had used WhatsApp because they believed it was a secure platform.
Contacted by Tilt, WhatsApp reinforced in a statement that its content moderation work does not interfere with the encryption of messages.
“WhatsApp provides a way for people to report spam or abuse on the platform, which includes sharing the latest messages in a conversation. This functionality is important to prevent the worst abuses on the internet. WhatsApp strongly disagrees with the idea that accepting reports that users choose to submit to the app is incompatible with end-to-end encryption,” said a WhatsApp spokesperson.
Regarding authorities’ access to user conversations, WhatsApp clarifies on its official website that only metadata is shared, and only when there is a court order — that is, only unencrypted information such as profile photos, names, group descriptions and phone numbers. The content of conversations is not shared.
“WhatsApp recognizes the work law enforcement authorities do to keep people safe around the world, and we often work in partnership with these authorities to ensure they know how to contact us and make requests to WhatsApp. WhatsApp’s site for law enforcement authorities contains an online system so they can safely submit legal requests,” says the company.