According to a report by ProPublica, WhatsApp has roughly 1,000 contract workers around the globe reviewing millions of pieces of user content.
When ProPublica pressed the company on the issue, WhatsApp spokespeople stuck to its privacy narrative.
WhatsApp’s “signature feature” has long been end-to-end encryption. When touting the feature, Facebook CEO Mark Zuckerberg emphasized how WhatsApp messages are so secure not even his company can read them.
As it turns out, that’s not exactly the case.
Contract workers in Dublin, Texas, and Singapore use special Facebook software to sift through millions of user messages, images, and videos. Content reaches them only after a user reports it as improper; an AI system then screens it and forwards it to the moderators.
ProPublica was told that the AI program lets a lot of harmless posts through. Once the content reaches them, moderators can see the last five messages in the thread.
More worryingly, ProPublica also notes that WhatsApp collects all of its users' metadata, regardless of their privacy settings.
Earlier this year, WhatsApp publicly resisted a request by the Indian government that it said would "break encryption". ProPublica's findings contradict that narrative, and, looking further, the report implies a narrative may be all it is.
ProPublica obtained an internal WhatsApp marketing presentation that emphasized “fierce” promotion of WhatsApp’s “privacy narrative”. The presentation goes on to compare WhatsApp’s “brand character” to the “immigrant mother” in a slide titled “brand tone parameters”.
The slide contained photos of activist Malala Yousafzai. The presentation makes no mention of WhatsApp’s content moderation.
When questioned, WhatsApp's director of communications, Carl Woog, said that the company doesn't consider it moderation. Specifically, he said: "We actually don't typically use that term for WhatsApp."
All the same, he did not explain why it isn't moderation.
Facebook said it considers reported messages direct communication between a user and WhatsApp, so by its logic reviewing them doesn't constitute breaking end-to-end encryption.
It points out that the user opted to share the content.
It should be noted, however, that the content of that communication includes messages from other parties who are entirely unaware the exchange is taking place: either another user who isn't the reporter, or any number of participants in a group chat.
The Facebook Family
Facebook is a lot more straightforward when detailing moderation policies on its other platforms—Facebook itself, and Instagram.
Facebook tries to differentiate what it does at WhatsApp from those policies, but ProPublica’s interviews with WhatsApp moderators show that the difference lies in how moderators access the content.
Once a user reports something, the reported message and the four preceding messages in the thread are unencrypted and viewable by the moderator.
ProPublica implies that this is not end-to-end encryption*. True end-to-end encryption means that only the sender and recipient can read the content. It also means a service cannot share message content with authorities, something WhatsApp has done.
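To illustrate the distinction, here is a minimal, purely illustrative sketch of the end-to-end idea. It uses a toy XOR cipher, not real cryptography (WhatsApp actually uses the Signal protocol); the point is only that a relay server holding just the ciphertext cannot read the message, because the key lives exclusively with the two endpoints.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy XOR "encryption" for illustration only; real end-to-end
    # messaging uses the Signal protocol, not anything like this.
    return bytes(b ^ k for b, k in zip(data, key))

# Sender and recipient share a secret key; the relay server never sees it.
key = secrets.token_bytes(32)
plaintext = b"meet at noon"

# The server only ever relays this ciphertext.
ciphertext = xor_cipher(plaintext, key)

# Only someone holding the key can recover the plaintext.
assert xor_cipher(ciphertext, key) == plaintext
```

In a scheme like this, the operator can hand authorities the ciphertext it relayed, but not the content. The moment readable copies of messages are forwarded to the operator's own staff, as in WhatsApp's reporting flow, that guarantee no longer covers them.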
The ProPublica report emphasizes that WhatsApp can read users' messages without their consent. The company's end-to-end encryption claims rest on its own specific interpretation of the term, an interpretation critics are now challenging.
*After significant criticism, ProPublica has reworded sections of its report to remove the implication that WhatsApp’s abuse report system breaks end-to-end encryption.