Gordon Campbell on the policing of social media content

First published on Werewolf

At the height of the Arab Spring less than a decade ago, social media platforms were hailed as bright new tools of liberation in the hands of the oppressed. In the wake of the Christchurch mosque attacks though, the pendulum has swung in the opposite direction. Social media outlets are being examined for their role in (a) radicalising the shooter (b) confirming him in his beliefs by linking him to an online community of like-minded racists and (c) enabling his horrifying actions to be livestreamed and (ultimately) shared with a large global audience. Has social media handed hate a megaphone? For better and for worse, the era of Google, Facebook etc being trusted to self-regulate their content seems to be over. All the talk now is about government regulation.

As PM Jacinda Ardern noted at her post-Cabinet press conference on Monday, many countries – some, even before the Christchurch attacks – have unveiled ways in which they intend to regulate social media outlets and ensure the removal of harmful material under threat of fines, or even imprisonment, if need be. Ireland, France, Germany and the UK have all tackled this problem but, as Ardern noted, each of their approaches has been somewhat different. Her inclination is to pursue an international consensus on the issue, and to encourage countries to speak in unison.


That approach sounds reasonable enough, but it is only superficially plausible. Hard questions immediately arise. In what forum could such a consensus be pursued – someone at Monday’s press conference tentatively suggested UNESCO – and even if global agreement on what constitutes harmful content could be reached… under whose jurisdiction could the perpetrators be prosecuted? The United States is notoriously averse to having its courts and its citizenry subject to verdicts by foreign tribunals – and a global consensus on social media policing without the United States would always be a bit beside the point.

Adroitly, Facebook’s Mark Zuckerberg has not tried to argue against the current tide of outrage, but – in shades of the rope-a-dope tactics perfected by Muhammad Ali – he appears to be gambling that this outrage will probably wear itself out. Right now though, if countries want to regulate social media, Zuckerberg is saying that he isn’t going to stand in their way:

“He is putting it up to governments and parliaments to set the rules, not him,” says James Lawless, one of three Irish lawmakers who met with Zuckerberg in Dublin on April 2, just days after the Facebook boss urged countries to take a more proactive role in regulating social media. “He said he is uncomfortable with Facebook making decisions on content and issues of this type.”

You bet he is. His attitude seems to be: if they’re so inclined, let the politicians regulate, and let them face the Big Brother backlash when the public discovers just how hard (and infuriating) it is for governments to devise and operate an effective censorship system on the Net. By waving governments on ahead, Facebook is reckoning that it can minimise the damage to its own image in the meantime. Better to be able to say to its users: “Hey, these guys are really clueless, but they’re making us do it.”

One big question is whether a government setting rules around free speech is more palatable than a private company doing so. And even if such a system flies in the U.K., it might be difficult to export to countries with different cultural norms around speech.

“Difficult” is one way of putting it. That's another problem with the ‘global consensus’ approach being advocated by Ardern. In the UK and in many other parts of the world, the state would be more than willing to prohibit content that it deemed harmful or offensive to its sensibilities. Such states would also relish having the power to force multinational companies to hand over the keys to encrypted material, and to punish anyone who disagreed online with its thoughts on such matters.

Moderation in all things

For all its many failings, Facebook has more experience than anyone else with the problems involved in trying to balance content regulation with the rights to freedom of expression. What we know about the process of moderating online content is that it can be very harmful to the underpaid people doing the donkey work of watching a lot of vile material online all day, and then making judgement calls about it.

The basis of those judgements has been enshrined in a rulebook that the ProPublica website got hold of a couple of years ago. To convey the nuances – and absurdities – of the Facebook moderating manual, ProPublica began its account with this chilling example:

In the wake of a terrorist attack in London earlier [in 2017], a U.S. congressman wrote a Facebook post in which he called for the slaughter of “radicalized” Muslims. “Hunt them, identify them, and kill them,” declared U.S. Rep. Clay Higgins, a Louisiana Republican. “Kill them all. For the sake of all that is good and righteous. Kill them all.” Higgins’ plea for violent revenge went untouched by Facebook workers who scour the social network deleting offensive speech.

However, as ProPublica noted, a Facebook posting by Boston poet and Black Lives Matter activist Didi Delgado drew a different response:

“All white people are racist. Start from this reference point, or you’ve already failed,” Delgado wrote. The post was removed and her Facebook account was disabled for seven days.

How on earth could that happen? It is a product of the way the Facebook moderating rules are constructed. Certain social groupings are “protected categories” – and the list includes race, sex, gender identity, religious affiliation, national origin, ethnicity, sexual orientation and serious disability/disease. Delgado saying “all white people are racist” breaks that rule.

However – and this is crucial – when it comes to the subsets of those protected categories, almost anything goes. Rep Higgins only wanted to kill “radicalised Muslims”, not all Muslims, so that, Facebook deemed, passed the protection test. Similarly, you can’t denigrate “women” as such, but “women drivers” are fair game. You can’t denigrate “blacks” per se, but Facebook users can say what they like about “black teenagers” and allege that they’re coming after your racially pure white daughters. Migrants per se cannot be described on Facebook as “filth” but the sub-category of “filthy migrants” is allowed. In that case it depended entirely on “whether the comparison was in the noun form.” And so on.
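Reduced to code, the rule is startlingly simple. Here is a minimal sketch of that logic in Python – the lexicon and function are invented for illustration, not drawn from Facebook’s actual rulebook – which reproduces the pattern ProPublica describes: bare protected groups are shielded, while qualified subsets of them are not.

```python
# Toy sketch of the "protected category vs. subset" rule described above.
# The lexicon and function names are hypothetical; Facebook's real rulebook
# is far more elaborate than this.

# Invented lexicon: each term maps to the protected category it names.
PROTECTED_TERMS = {
    "women": "sex",
    "men": "sex",
    "white": "race",
    "black": "race",
    "blacks": "race",
    "muslims": "religious affiliation",
    "migrants": "national origin",
}

def is_protected(group: str) -> bool:
    """True iff every word of the group names a protected attribute.

    A bare protected group ("women", "migrants") stays protected, and so
    does an intersection of protected attributes ("white men"). But an
    unprotected modifier ("women drivers", "radicalised muslims",
    "filthy migrants") creates a subset, and the group loses protection.
    """
    return all(word in PROTECTED_TERMS for word in group.lower().split())

if __name__ == "__main__":
    for group in ("women", "women drivers", "migrants",
                  "filthy migrants", "white men", "black teenagers"):
        status = "protected" if is_protected(group) else "fair game"
        print(f"{group!r}: {status}")
```

On this logic, an attack on “white men” is blocked while an attack on “black teenagers” sails through – which is exactly the pattern in the training slide described next.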

Interestingly, ProPublica also found a slide in its Facebook moderating trove that quizzed moderators on which of these groups belonged to a protected category: women drivers, black children, or white men? The correct answer is “white men”. When it comes to permitted political expression, the bias at work is just as evident. Forget the glory days of the Arab Spring. These days, Facebook is right onside with the status quo.

[Facebook moderators] have banned posts that praise the use of “violence to resist occupation of an internationally recognized state.” The company’s workforce of human censors, known as content reviewers, has deleted posts by activists and journalists in disputed territories such as Palestine, Kashmir, Crimea and Western Sahara.

The elephant in the room for the Facebook moderating staff is, of course, the President of the United States. Repeatedly, Donald Trump denigrates vulnerable groups (and his political foes) in terms that have apparently inspired violent actions by his inflamed fans. In the past week, Trump supporter Patrick Carlineo has been charged with threatening to kill Muslim Congresswoman Ilhan Omar. Reportedly, Carlineo told the FBI "that he was a patriot, that he loves the President, and that he hates radical Muslims in our government."

Similarly, Robert Bowers, the white racist who killed 11 people in a Pittsburgh synagogue last year, left an online trail of mounting rage in which he seemed to have been tipped over the edge by the President’s inflammatory comments on Fox News about the Central American migrant caravan. On Monday, the Pittsburgh media carried a report that directly compares Bowers and the Christchurch shooter, and the near-identical role that social media (and hate-filled message boards like Gab and 8chan) had played in the path to radicalisation taken by both men.

"Both killers announced in their preferred internet forums that they were about to commit violence and seemed to identify their fellow forum participants as community members who might share their propensity to commit violence," the study stated. "Both killers were consumed by the conspiracy of a 'white genocide.' Both Gab and 8chan – the go-to forums for Robert Bowers and Brenton Tarrant, respectfully – are rife with white supremacist, hateful, anti-Semitic bigotry.” The study's research explored how each killer identified with his online community, with a degree of adherence comparable to ISIS followers, and illustrated that same language is used constantly on both platforms.

In a further Christchurch/Pittsburgh parallel, the mayor of Pittsburgh has responded to the synagogue attack by imposing a ban on semi-automatic weapons within the city limits.

It being America, this ban is unlikely to survive a challenge in the courts:

The legislation bans the use of assault-style weapons in public places and permits courts to authorize the temporary seizure of guns from anyone determined to be dangerous to themselves or others. Allegheny County District Attorney Stephen Zappala does not believe that Pittsburgh has the authority to restrict certain types of weapons, ammunition and firearm accessories within city limits.

Reportedly, the NRA is already “helping” Pittsburgh residents to file an appeal to overturn the mayoral ban.

The state steps in

Over the past year or so, Mark Zuckerberg has proven himself a Zen master at deflecting criticism with conciliatory gestures, yet without changing course in any significant fashion. Facebook, for instance, has publicly argued in its own defence that very few people (200, tops!) actually saw the livestreaming of the Christchurch attack as it happened. Moreover, the company also managed to get Jacinda Ardern to repeat its hype about the numbers of mosque attack videos it had dutifully taken down from its platform.

What a joke. In the course of this week’s shambolic US House Judiciary Committee hearings on “hate crimes and white nationalism”, one of the very few bright spots came when Georgia Congressman Hank Johnson argued that for all of Facebook’s posturing, many copies of the mosque attack video have circulated freely on Facebook’s WhatsApp encrypted messaging service – which, by design, cannot track or prevent exchanges of hate speech material. The low point of the House hearings? Maybe when one conservative witness described the Christchurch shooter as being a “left wing terrorist”.

Quite by accident, the House hearings this week also illustrated the virulence of the problem. YouTube may have tried to win some brownie points by livestreaming this serious government investigation into the threat posed by white nationalism, but the YouTube comments section below it quickly turned into a hotbed of the same white racism (and race-mixing paranoia) that the House was pondering how to address. “Camel-faced Jew” was one of the least of the comments leveled at one of the witnesses. Eventually YouTube had to step in and disable the comments section entirely.

As Ardern indicated on Monday, other countries have decided to pass their own domestic legislation against offensive online content. Germany passed its Network Enforcement Act into law way back in October 2017, and you can read it here. Human Rights Watch’s criticism of that German legislative effort can be read here.

Australia’s much more recent effort can be read here. Unfortunately, as the Techdirt site has indicated, this wasn’t the finest moment in Australian parliamentary history: the social media law was bundled in with 18 other bills and passed in a Senate session lasting 45 minutes, with only one Senator asking to see the text of the social media bill – which, in any case, wasn’t available for scrutiny before being passed into law. As for the content…

The bill demands the removal of "objectionable" content within a "reasonable amount of time." "Reasonable" isn't defined. The bill simply demands "expeditious removal" after notification and an initial fine of $168,000 for not being expeditious enough. There's no legal definition of "expeditious" to rely on, so social media providers will apparently have to make do with the Attorney General's feelings.

[A]ttorney-General Christian Porter gave some indication during a televised briefing of how quickly individuals and companies might have to act.

“Using the Christchurch [massacre] example, I can't precisely say what would have been the point of time at which it would have been reasonable for [Facebook] to understand that this was live streaming on their site or playable on their site, and they should have removed it,” Porter said.

So, tech companies have an hour to remove anything the Australian government claims is abhorrent, whether or not the content was uploaded by an Australian. If this vague deadline isn't met, the fines begin escalating. $168,000 is merely the starting point. Individuals can be hit with a three-year jail term, up to $2.1 million in fines, or both. Companies meanwhile can face fines up to $10.5 million or 10 percent of their annual turnover.

Alas, the UK White Paper on the subject – released on April 9 – wasn’t any better. Key terms (e.g. “harmful”) were not defined, and in the accompanying video British PM Theresa May indicated just how widely the legislative net could be cast beyond the threat posed by white nationalism, with fines attached to “online harms” of virtually any sort. Apparently, trolling (also undefined) would be among the unacceptable “harms” that May has in mind. Social media would also face an obligation to promote a “fairness doctrine” online whereby competing views are aired and shared equally. Oh, and people will have to stop saying mean and hurtful things to people in public life. I’m not kidding. Here’s how May herself put it:

‘As set out in Box 14, those involved in public life in the UK experience regular and sustained abuse online, which goes beyond free speech and impedes individuals’ rights to participate. As well as being upsetting and frightening for the individual involved, this abuse corrodes our democratic values and dissuades good people from entering public life.’

The full absurdity of the exercise intended by the UK government is outlined here.

With these two train wrecks as precedents, no wonder that Ardern is politely declining to launch her own domestic legislation on the policing of Net content. For now, she has kicked for touch by urging the whole world to reach a consensus (somehow, somewhere) on the subject.

Footnote One: You may be wondering how the Facebook moderators deal with Donald Trump’s constant rounds of denigration of women, Mexicans, Muslims, migrants and people with disabilities:

The documents reviewed by ProPublica indicate, for example, that Donald Trump’s posts about his campaign proposal to ban Muslim immigration to the United States violated the company’s written policies against “calls for exclusion” of a protected group. As The Wall Street Journal reported last year, Facebook exempted Trump’s statements from its policies at the order of Mark Zuckerberg, the company’s founder and chief executive.

Footnote Two: Geo-blocking, whereby YouTube blocks certain territories from access to content particularly offensive within those countries – the classic example is Thailand, where criticising the monarch is off limits – is not applicable to global networks like Facebook and Twitter. What YouTube could also do, though, is tweak the algorithms that link its viewers (via suggestions) to deeper and darker expressions of white supremacist content. (You liked the mosque attack video? Well, try this.) Algorithms so sensitive to users’ online purchasing choices can surely be made equally sensitive to white nationalist content, and shut down the recommendations at the point of entry.
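To make that suggestion concrete, here is a minimal sketch of what gating recommendations “at the point of entry” could look like. Everything here is hypothetical – the data model, classifier score and threshold are invented, and nothing reflects YouTube’s actual, proprietary systems: candidate videos are screened by a content classifier, and flagged items are dropped before ranking, so a flagged video never competes for a recommendation slot at all.

```python
# Hypothetical sketch of filtering recommendation candidates "at entry".
# The Video fields, scores and threshold below are invented for
# illustration; they do not describe any real platform's systems.

from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    relevance: float        # how well the video matches the viewer's history
    extremism_score: float  # hypothetical classifier output in [0, 1]

BLOCK_THRESHOLD = 0.8  # invented cut-off for dropping a candidate

def recommend(candidates: list[Video], k: int = 5) -> list[Video]:
    """Drop flagged candidates before ranking, not after.

    Filtering at the candidate-generation stage means a flagged video
    never reaches the ranking step, however strongly it matches the
    viewer's history.
    """
    safe = [v for v in candidates if v.extremism_score < BLOCK_THRESHOLD]
    return sorted(safe, key=lambda v: v.relevance, reverse=True)[:k]

if __name__ == "__main__":
    pool = [
        Video("a", relevance=0.9, extremism_score=0.95),  # dropped at entry
        Video("b", relevance=0.7, extremism_score=0.10),
        Video("c", relevance=0.5, extremism_score=0.30),
    ]
    print([v.video_id for v in recommend(pool, k=2)])  # -> ['b', 'c']
```

The design point is where the filter sits: applied before ranking, a high-relevance but flagged video simply vanishes from the pool, rather than being demoted and potentially resurfacing.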
