Earlier this month, Facebook released the guidelines it uses to review content. The social network refers to this set of rules as "Community Standards," regulations meant to "encourage expression and create a safe environment."
Published on Facebook's corporate blog, the policies presented to the public are, according to the company, based on input from experts as well as community members. Yet deciding what to allow on the platform appears to be a struggle for Facebook executives, as documents leaked to Motherboard show.
These documents provide a more detailed look at how Facebook decides what is appropriate and what is not, perhaps telling a story the public was never meant to know.
Following the Charlottesville attack, when a white supremacist named James Fields rammed a car into anti-racist protesters, a Facebook post that read "James Fields did nothing wrong" caused a headache for the company, which did not want to be perceived as allowing hate speech on its platform.
Facebook executives distributed training materials to the company's army of moderators that included the post praising the attacker.
"Recent incidents in the United States (i.e. Charlottesville) have shown that there is potentially confusion about our hate org policies and the specific hate orgs in specific markets," the document reads.
In an effort to position itself on the issues of white supremacy and white nationalism, Facebook attempted to draw the line between the two, documents show.
Facebook, the internal document says, does not allow support or representation of white supremacy, but does allow praise and support of white nationalism. One of the examples the company chose to illustrate white supremacy as an ideology is a Facebook post that reads "White supremacy is the right thing."
White nationalism, which Facebook said it allows, was described using the following example posts: "White nationalism is the only way" and "I am a proud white nationalist."
Facebook created a third category, stating that it also allows white separatism. "The US should be a white-only nation" was the example used of a Facebook post that would glorify white separatism.
In a statement supplied to Quartz, a Facebook spokesperson said the company consulted researchers and academics while creating its policies, noting the difference between white supremacy and white nationalism, asserting that there is a fine line to be drawn between believing that races should be separated, and believing one race should dominate the other.
Zionists, Basque separatists, and white separatists, the spokesperson said, don't necessarily consider others to be inferior.
Furthermore, the same documents show that Facebook also classifies hate groups and individuals, placing them in categories based on the strength of what company officials refer to as "signals." There are three kinds of signals, the documents reveal: strong, medium, and weak.
If an individual is a founder or prominent member of a hate organization, uses dehumanizing language, or calls for violence, the signal they send is considered strong. A partnership or alliance with a previously banned hate organization places one in the weak-signal category.

According to the same internal documents, and contradicting its policy of allowing fictional characters to push hateful messages, Facebook has banned certain images of Pepe the Frog.
The leaked training material shows two images of Pepe: one depicting the cartoon frog "in a context of hate," the other a standard Pepe meme. Moderators are instructed to remove the hateful Pepe and ignore the harmless meme.