A thin blue line: how Facebook deals with controversial content


Stories of Facebook allowing beheading videos while removing breastfeeding images, and of its subsequent reversal of the decision to allow graphic violence after public uproar, have led many to question how Facebook should treat controversial content.

Australian users, for example, were enraged by Facebook’s initial refusal to take down an Aboriginal Memes page last year, although Facebook later reversed that decision too and removed the racist page.

Behind the argument over what Facebook “should” or “should not” do, however, is the more complex question of what it actually does.

We know that the situation is complex. Facebook is free for users, but its customers are advertisers, so it depends on extracting value from aggregated data about what users share. It needs to keep both advertisers and users confident that they can share what they want, while keeping them secure from content that might lead them to complain or leave.

Users are responsible for the content they upload, and Facebook expects them to take that responsibility seriously. There are arguments about Facebook’s own responsibility, given that the easier uploading is, the more likely it becomes, and given social media’s role in amplifying content. Nevertheless, users choose to upload content and Facebook has to deal with the fallout.

As such, it needs to make decisions to allow or remove content at a very difficult nexus of freedom of expression, community standards, ethics, the law, and commercial viability.

This difficulty far exceeds that of the editorial decisions of traditional media companies, primarily because the content is user-generated, spans almost every human method of communication, and is orders of magnitude larger in volume. With around one billion accounts, solutions also need to be scalable.

So how does Facebook itself provide and explain its decision-making processes?

Choices

Facebook’s stated mission (at the time of writing) is “to give people the power to share and make the world more open and connected”.

There is some irony here, too. Mark Zuckerberg has long believed that users should be open, but one might question the openness of a platform that is really a walled garden designed to put itself at the centre of the Internet.

Still, the mission of giving people “the power to share” is laudable, and Facebook has Community Standards and a Statement of Rights and Responsibilities governing that sharing. These provisions basically boil down to two sets of choices.

The choice for Facebook: harm versus offence

The first set of choices is for Facebook itself. As Facebook’s Reporting Guide flowchart shows, most choices involve deciding whether content causes harm or merely offence.

\"\"

The Community Standards page states that Facebook steps in to prevent harm. There is a hard line against Violence and Threats as well as Self-Harm, with strictly negative evaluations of the content and similarly strict prohibitions:

Violence and Threats: Safety is Facebook’s top priority. We remove content and may escalate to law enforcement when we perceive a genuine risk of physical harm, or a direct threat to public safety. You may not credibly threaten others, or organize acts of real-world violence. Organizations with a record of terrorist or violent criminal activity are not allowed to maintain a presence on our site. We also prohibit promoting, planning or celebrating any of your actions if they have, or could, result in financial harm to others, including theft and vandalism.

Self-Harm: Facebook takes threats of self-harm very seriously. We remove any promotion or encouragement of self-mutilation, eating disorders or hard drug abuse. We also work with suicide prevention agencies around the world to provide assistance for people in distress. [Emphasis added]

Compare these hard lines to the more nuanced lines for Bullying and Harassment and Hate Speech, both of which include some positive evaluation of certain content that may be offensive but is not directly harmful:

Bullying and Harassment: Facebook does not tolerate bullying or harassment. We allow users to speak freely on matters and people of public interest, but take action on all reports of abusive behavior directed at private individuals. Repeatedly targeting other users with unwanted friend requests or messages is a form of harassment.

Hate Speech: Facebook does not permit hate speech, but distinguishes between serious and humorous speech. While we encourage you to challenge ideas, institutions, events, and practices, we do not permit individuals or groups to attack others based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or medical condition. [Emphasis added]

No-one could pretend that these would be easy decisions, either in terms of content or process given Facebook’s enormous user base.

Facebook has four different sets of teams making these decisions across four countries (and time zones) and speaking 24 languages: the Safety Team, the Hate and Harassment Team, the Abusive Content Team, and the Access Team. Pity them.

Arguably these teams might be helped by technical solutions such as improved interstitial age restriction warnings, but there will always be a need for choices made by people.

The choice for users: the golden rule

For users, choice entails judicious application of the golden rule – do unto others as you would have them do unto you. The Community Standards page frames this concept with an emphasis on all users having choices.

For example, on Graphic Content the Community Standards page says:

Graphic Content: Facebook has long been a place where people turn to share their experiences and raise awareness about issues important to them. Sometimes, those experiences and issues involve graphic content that is of public interest or concern, such as human rights abuses or acts of terrorism. In many instances, when people share this type of content, it is to condemn it. However, graphic images shared for sadistic effect or to celebrate or glorify violence have no place on our site.

When people share any content, we expect that they will share in a responsible manner. That includes choosing carefully the audience for the content. For graphic videos, people should warn their audience about the nature of the content in the video so that their audience can make an informed choice about whether to watch it. [Emphasis added]

The Nudity and Pornography standard is rather more coy (read “defensible and deniable”, perhaps):

Nudity and Pornography: Facebook has a strict policy against the sharing of pornographic content and any explicitly sexual content where a minor is involved. We also impose limitations on the display of nudity. We aspire to respect people’s right to share content of personal importance, whether those are photos of a sculpture like Michelangelo’s David or family photos of a child breastfeeding. [Emphasis added]

Nevertheless, these positions rhetorically support users’ self-determination, or at least include a space for user choice in a collaborative decision-making system.

That being said, as this long page of Facebook criticism shows, Facebook is not always immediately receptive.

But to say that Facebook is not always immediately receptive is not the same as saying that it is not responsive at all. Indeed, there is another very important aspect of user choice built into Facebook that has flown somewhat under the radar for all those arguing about what Facebook should do. This is Facebook’s Social Reporting system.

Social Reporting is an active system that allows users to ask trusted friends to help them resolve issues themselves rather than immediately resorting to Facebook’s formal system. This is quite different to ‘adhering’ to passive walls of text in the Community Standards and Terms of Service.

Social Reporting promotes user self-determination and community standards as the processes and thresholds for deciding on controversial content. This approach should be preferred to either corporate or governmental decision making, both of which will always err on the side of caution (and possibly repression).

Everyone’s responsibility

While there is much that could be criticised about Facebook in terms of privacy controversies, when it comes to controversial content decisions Facebook is, on balance, at least providing some mechanisms for action.

Still, Facebook relies on our confidence in order to monetise our content, and must continue to earn that confidence. We must be vigilant in holding Facebook to its word and constantly pushing for higher technical and social standards for accountable responsibility.
