What happens when free-speech engines like Twitter and Facebook become megaphones for violence?

Social networks and platforms like Facebook, Twitter and YouTube have given everyone a megaphone they can use to share their views with the world, but what happens — or what should happen — when their views are violent, racist and/or offensive? This is a dilemma that is only growing more intense, especially as militant and terrorist groups in places like Iraq use these platforms to spread messages of hate, including graphic imagery and calls to violence against specific groups of people. How much free speech is too much?

That debate flared up again following an opinion piece that appeared in the Washington Post, written by Ronan Farrow, an MSNBC host and former State Department staffer. In it, Farrow called on social networks like Twitter and Facebook to “do more to stop terrorists from inciting violence,” and argued that if these platforms screen for things like child porn, they should do the same for material that “drives ethnic conflict,” such as calls for violence from Abu Bakr al-Baghdadi, the leader of the Jihadist group known as ISIS.

“Every major social media network employs algorithms that automatically detect and prevent the posting of child pornography. Many, including YouTube, use a similar technique to prevent copyrighted material from hitting the web. Why not, in those overt cases of beheading videos and calls for blood, employ a similar system?”

Free speech vs. hate speech — who wins?

In his piece, Farrow acknowledges that there are free-speech issues involved in what he’s suggesting, but argues that “those grey areas don't excuse a lack of enforcement against direct calls for murder.” And he draws a direct comparison — as others have — between what ISIS and other groups are doing and what happened in Rwanda in the mid-1990s, where the massacre of hundreds of thousands of Tutsis was driven in part by radio broadcasts calling for violence.

In fact, both Twitter and Facebook already do some of what Farrow wants: Twitter’s terms of use specifically forbid threats of violence, for example, and the company has removed recent tweets from ISIS and blocked accounts, apparently in response to the posting of beheading videos and other content (Twitter has a policy of not commenting on actions it takes against specific accounts, so we can’t know for sure why).


The hard part, however, is drawing a line between egregious threats of violence and political rhetoric, and/or picking sides in a specific conflict. As an unnamed executive at one of the social networks told Farrow: “One person's terrorist is another person's freedom fighter.”

In a response to Farrow’s piece, Jillian York, the director for international freedom of expression at the Electronic Frontier Foundation, argues that making an impassioned call for action by social networks is a lot easier than sorting out which specific content should be removed. Maybe we could agree on beheading videos, but what about other types of rhetoric? And what about the journalistic value of having these groups post information publicly, which has become a crucial tool for fact-checking journalists like British blogger Brown Moses?

“It seemed pretty simple for Twitter to take down Al-Shabaab's account following the Westgate Mall massacre, because there was consistent glorification of violence… but they've clearly had a harder time determining whether to take down some of ISIS' accounts, because many of them simply don't incite violence. Like them or not… their function seems to be reporting on their land grabs, which does have a certain utility for reporters and other actors.”

Twitter and the free-speech party

As the debate over Farrow’s piece expanded on Twitter, sociologist Zeynep Tufekci, an expert on the impact of social media on conflicts such as the Arab Spring revolutions in Egypt and the more recent demonstrations in Turkey, argued that even free-speech considerations have to be tempered by the potential for inciting actual violence against identifiable groups:

Free (and offensive) speech is a fundamental right but so is the right to live in peace, free from ethnic violence. Yes there's a trade-off.
Zeynep Tufekci (@zeynep) July 14, 2014

It’s easy to sympathize with this viewpoint, especially after seeing some of the terrible images coming out of Iraq. But at what point does protecting a specific group from theoretical acts of violence win out over the right to free speech? It’s not clear where to draw that line. When the militant Palestinian group Hamas made threats against Israel during an attack on the Gaza Strip in 2012, should Twitter have blocked the account or removed the tweets? What about the tweets from the official account of the Israeli military that triggered those threats?

What makes this difficult for Twitter in particular is that the company has talked a lot about wanting to be the “free-speech wing of the free-speech party,” and it has fought for the rights of its users on a number of occasions: it resisted demands that it hand over information about French users who posted homophobic and anti-Semitic comments, and it fought a U.S. Justice Department order to hand over information about supporters of WikiLeaks.

Despite this, even Twitter has been caught between a rock and a hard place, with countries like Russia and Pakistan pressuring the company to remove accounts and use its “country withheld content” tool to block access to tweets that are deemed to be illegal — in some cases merely because they involve opinions that the authorities don’t want distributed. In other words, the company already engages in censorship, although it tries hard not to do so.

Who decides what content should disappear?

Facebook, meanwhile, routinely removes content and accounts for a variety of reasons, and has been criticized by many free-speech advocates and journalists — including Brown Moses — for making crucial evidence of chemical-weapon attacks in Syria vanish by deleting accounts, and for doing so without explanation. Google also removes content, such as the infamous “Innocence of Muslims” video, which sparked a similar debate about the risks of trying to hide inflammatory content.

Reply to @RonanFarrow Please answer my question first, but for yours: no, I don't want private corps deciding what political speech is OK.
Glenn Greenwald (@ggreenwald) July 11, 2014

What Farrow and others don’t address is the question of who should decide which content gets deleted in order to banish violent imagery. Should we just leave it up to unnamed executives to remove whatever they wish, and to arrive at their own definitions of what is appropriate speech and what isn’t? Handing such an important principle over to the private sector, with virtually no transparency about its decision-making and no court of appeal, seems unwise, to put it mildly.

What if there were tools we could use as individuals to remove or block certain types of content ourselves, the way Chrome extensions like HerpDerp do for YouTube comments? Would that make things better or worse? To be honest, I have no idea. What happens if we use tools like that to forget a genocide? What does seem clear is that handing even more of that kind of decision-making over to faceless executives at Twitter and Facebook is not the right way to go, no matter how troubling the content might be.

Post and thumbnail images courtesy of Shutterstock / Aaron Amat
