Facebook’s internal rulebooks on sex, terrorism, hate speech, violence, and more have been leaked, and according to a report from the Guardian, many will find the company’s internal policies questionable. The documents – more than 100 internal training manuals, spreadsheets, and flowcharts, according to the Guardian – were revealed in a Guardian investigation dubbed “The Facebook Files,” giving a first significant look into Facebook’s actual internal content policies.
Among the more interesting excerpts, Facebook will allow users to live-stream self-harm, on the basis that it “doesn’t want to censor or punish people in distress.” Photos of non-sexual physical abuse and bullying of children are allowable so long as there is no sadistic or celebratory element. Videos of violent death, while marked as disturbing, are also generally allowed, on the basis that they create “awareness” of issues such as mental illness. Animal cruelty is treated similarly; “celebrating” is not allowed, but it is otherwise acceptable to raise “awareness” of the issue.
Perhaps most interesting is Facebook’s extremely variable policy on threats of death and violence.
According to the documents, most threats of violence are to be considered either generic or not credible. Facebook reasons that “people use violent language to express frustration online” and that users should feel “safe to do so” on Facebook. The documents include specific examples of allowable and forbidden threats.
Among the acceptable threats are such examples as “Little girl needs to keep to herself before daddy breaks her face,” “Unless you stop b***hing I’ll have to cut your tongue out,” “Let’s beat up fat kids,” “To snap a b***h’s neck, make sure to apply all your pressure to the middle of her throat,” and “I hope someone kills you.”
Unacceptable threats include “Someone shoot Trump,” and “I’ll destroy the Facebook Dublin office.”
“People commonly express disdain or disagreement by threatening or calling for violence in generally facetious and unserious ways,” adds the handbook.
According to The Guardian, Facebook employs some 4,500 “content moderators” in moderation hubs around the world to oversee the posts of some 2 billion monthly users, and recently announced plans to hire another 3,000. Most, however, are employed by “subcontractors” – essentially call centers – and Facebook isn’t forthcoming about where those employees are located. These call center workers are supported by automated systems and overseen by a handful of “subject matter experts” who review the quality of their work and pass on guideline changes.
Content moderators are given two weeks of training and the recently leaked manuals. Armed with a tool called the “single review tool,” designed to help moderators filter content, they’re thrown into the wild to determine whether millions of daily reports should be ignored, escalated, or deleted.
Facebook also has a system called the “cruelty checkpoint,” in which a moderator contacts the person who posted reported content, requesting – but not enforcing – that they consider removing it because someone found it upsetting. If the user continues to post material that gets reported, a temporary ban can result.
According to members of the moderation team The Guardian spoke to, they often feel overwhelmed by the sheer volume of posts they’re required to review, and they make mistakes; the stress of the job is allegedly considerable and has driven a high rate of turnover, leaving few experienced senior content moderators.
Facebook, at least, acknowledged that their moderators “have a challenging and difficult job. A lot of the content is upsetting. We want to make sure the reviewers are able to gain enough confidence to make the right decision, but also have the mental and emotional resources to stay healthy. This is another big challenge for us.”
[Featured Image by Dan Kitwood/Getty Images]