Gordon Campbell on Ardern and Macron’s campaign against violent social media content
Should tech companies be liable for the content they carry on their digital platforms? No more, surely, than a library should be held responsible for what readers do with the books on its shelves. Yet in the wake of the Christchurch mosque shootings, there is now a strong political appetite for some sort of regulatory action. Arguably, there is a risk that the cure for anti-social online content could end up being almost as bad as the disease.
This week, PM Jacinda Ardern and French president Emmanuel Macron announced they will co-host a summit meeting in Paris
on May 15, at which other heads of state and the CEOs of major tech companies will be asked to commit to a statement
named the “Christchurch Call” – which will require them to take all practical steps to combat violent extremism and terrorism on social media platforms.
The recent bombings by Muslim activists of three Christian churches and five other locations in Sri Lanka (including
tourist hotels and a zoo) have added to calls for the global regulation of Internet content. Christians comprise only
about 7% of the Sri Lankan population. Almost exactly a year ago, the NYT carried a story about how social media platforms were fostering communal violence in Sri Lanka between the 70% Buddhist majority and the slightly less than 10% Muslim minority on the island:
A reconstruction of Sri Lanka’s descent into violence, based on interviews with officials, victims and ordinary users
caught up in online anger, found that Facebook’s newsfeed played a central role in nearly every step from rumour to
killing. Facebook officials, they say, ignored repeated warnings of the potential for violence, resisting pressure to
hire moderators or establish emergency points of contact.
The role that social media platforms play in the developing world has received relatively little attention, compared to
the focus on the First World’s online trends and outcomes. Perceptions have changed dramatically over the past decade.
During the Arab Spring and the 2009 Green Revolution in Iran, Facebook, Twitter and other platforms were seen as playing
an entirely positive role in the mobilising of demonstrations by the oppressed.
Subsequently, those platforms left behind digital footprints that enabled the same oppressive regimes to track down,
imprison, torture and/or execute many of the activists involved. In the West – and Christchurch aside – the negative consequences of online news content are not usually as serious. Again, that 2018 NYT article noted the difference:
In the Western countries for which Facebook was designed, this leads to online arguments, angry identity
politics and polarization. But in developing countries, Facebook is often perceived as synonymous with the Internet and reputable sources are scarce, allowing emotionally charged rumours to run rampant.
Shared among trusted friends and family members, they can become conventional wisdom.
And where people do not feel they can rely on the police or courts to keep them safe, research shows, panic over a perceived threat can lead some to take matters into their own hands….[In 2017] in rural Indonesia,
rumours spread on Facebook and WhatsApp, a Facebook-owned messaging tool, that gangs were kidnapping local children and
selling their organs. Some messages included photos of dismembered bodies or fake police fliers. Almost immediately, locals in nine villages lynched outsiders they suspected of coming for their children. Near-identical social media rumours have also led to attacks in India and Mexico….
Only this week, two Saudi women who fled from their abusive families in Saudi Arabia complained to the Guardian newspaper about a multi-purpose app called Absher that's available in the Google and Apple online stores. Among its legitimate features, the Absher app reportedly enables Saudi men to track the whereabouts of the women they control under the country's oppressive male guardianship system:
Absher, which is available in the Saudi version of Google and Apple online stores, allows men to update or withdraw
permissions for female relatives to travel abroad and to get SMS updates if their passports are used, according to
researchers…. “It gives men control over women,” said Wafa, 25. “They have to remove it,” she added, referring to Google
and Apple.
This example highlights one of the problems with one-size-fits-all global solutions. Platforms, functions and apps that
may be benign in one context can be malevolent in others. From the outset, Ardern has indicated that since harmful
digital content is a global problem, this issue cannot be resolved by individual countries passing their own domestic
legislation. Ever since the mosque attacks took place, she has consistently argued for a global consensus between
countries, via a constructive engagement with the major tech companies. Yet given the cultural variations and differing
sensitivities in various parts of the world, any global resolutions arrived at next month in Paris are likely to be
either Western-centric, or too general to be useful.
Paris and Beyond
Little wonder, though, that Facebook, Google etc seem willing to show up at the Ardern/Macron gathering in Paris.
Hitherto, the tech companies have enjoyed two decades of virtual self-regulation. Now that externally imposed regulation
is very much in the political wind,
Facebook CEO Mark Zuckerberg has been signalling his preference that any such regulation should be devised and imposed
by governments, rather than by the tech
companies themselves – presumably, so that the politicians then get to wear the backlash once people realise the full
implications of allowing the state to define and police the content deemed acceptable on the Net.
Thankfully for everyone, there is some low-hanging fruit to be plucked here. Some of the old-school white supremacist
websites have already shut down, as the racists adopt more sophisticated forms of messaging for their hateful agenda.
The Facebook Live service – which enabled the Christchurch shooter to livestream his actions (and then have them
disseminated by an encrypted Facebook messenger service) will be one of the main targets of Ardern and Macron in Paris.
Zuckerberg has also already signalled that Facebook will create an independent oversight body to adjudicate appeals on
content moderation issues. That concession is likely to be recycled for a headline or two in Paris. Facebook’s clear
preference is for after-the-fact takedowns, rather than pro-active moderation. That makes sense. Zuckerberg has
signalled that any hate speech content or white nationalist propaganda that does make it onto the Facebook platform in
future will be subject to a ‘notice and takedown’ complaint system that – presumably – will operate very much like the
imperfect system that currently monitors copyright complaints online. Like the copyright ‘notice and takedown’ system,
such procedures are likely to prove cumbersome, and would be similarly open to being gamed by those with an axe to grind
against online political expression they don’t like.
Once you get beyond those low-hanging fruit, it becomes difficult to censor online content without doing real damage to
freedom of expression, and to genuine political dissent. It would be unfortunate if the best friends of the
Ardern/Macron initiatives turn out to be the tyrants in countries that would (a) dearly love to see tech companies forced to hand over the keys to encryption, and (b) readily embrace further restrictions being put on the online
content their dissidents are allowed to post.
Taking on the US
Since Americans own and run the world’s main social media companies, any global solutions proposed in Paris next month
will need to bring the US on board. Those solutions would be difficult to enforce. The US is notoriously reluctant to
allow foreign tribunals to impose legal penalties on US citizens or companies.
As this column has indicated before, the piece of US legislation that safeguards Internet freedom of expression – and
the Internet’s unique ability to disseminate both hate speech and legitimate dissent – is a tiny clause that’s commonly
called Section 230 of the Communications Decency Act of 1996. In its entirety, it says this:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information
provided by another information content provider.
US law professor Jeff Kosseff has just written an entire book about this clause and (with good reason) that book is called The Twenty-Six Words That Created the Internet. Crucially, the clause recognises that online platforms do not, and cannot, pre-moderate all or even most of the content that they
carry but do not create. (For one thing, there’s just far too much of it being uploaded every hour of every week.)
Demanding that all of it should be pre-moderated for tone and content would radically change the nature of the Internet.
Unfortunately, there are some US politicians who would like to do just that, and who are seeking to significantly change
Section 230. Democrats such as House Speaker Nancy Pelosi and conservative Republicans such as Senator Ted Cruz have been almost in unison in their recent criticisms of Section 230. Their
underlying motives may differ, but that is almost beside the point. Pelosi says she wants Congress to limit hate speech
by making the platforms more liable for what they carry. Cruz, and US President Donald Trump also want to render the
platforms liable – but mainly in order to inject a “fair” balance (ie pro-Republican messaging) into any and all the
content they carry during the run-up to the US elections next year.
The only path to that goal of turning the Internet into something that looks more like Fox News would require demolishing the ‘safe harbour’
protections currently afforded by Section 230. It would be unfortunate if the mosque attacks (and the related desire to
prevent the spread of white nationalist messaging) were to be enlisted in this Pelosi/Republican offensive.
To that end, however, Trump met this week with Twitter CEO Jack Dorsey to discuss Twitter’s allegedly discriminatory treatment of the White House:
Earlier in the day, Trump had tweeted a complaint that Twitter is "very discriminatory" and does not "treat me well as a Republican…." In a statement, a Twitter spokesperson said, "Jack had a constructive meeting with the President of the
United States today at the president's invitation. They discussed Twitter's commitment to protecting the health of the
public conversation ahead of the 2020 U.S. elections…
So…whatever else it may be, getting more control over social media platforms is part and parcel of the Republicans’ 2020
election strategy. Reality, as Stephen Colbert once joked, has a well-known liberal bias. Republicans in the US and
Tories in the UK would like to change that situation by regulating social media content. Last week, UK PM Theresa May
unveiled her own thoughts on the subject. Here’s a useful link to what May has in mind.
Incredibly, May’s proposals would include making social media companies pro-active (somehow) in stopping people from saying mean things online about people in public office, (somehow) eliminating fake news and disinformation, and (somehow) promoting a ‘fairness’ doctrine that would incorporate diverse opinions into everything that social media
platforms carry.
As an aside, it’s worth noting that an Internet without Section 230 immunity would not erode the power of Google, Facebook, Twitter and the other tech giants. It would actually reinforce their market dominance. That’s because small
startups would be unable to afford the resources to pre-moderate the content they carry, or to meet the risk liability
if online platforms were made responsible for the tone, balance, and negative uses to which their content could
conceivably be put.
The Likely Paris Outcomes
So given all this…what can we expect from the Ardern/Macron gathering in Paris? As mentioned, the best we can hope for
is a few headline gains – Facebook Live canned or restricted, Zuckerberg recycling his already promised ‘independent’
notice and takedown panel for violent online content, Google agreeing to stop its algorithms from recommending links to
ever-more extreme content. What would be a disastrous outcome? That would be if the participants signed off on a requirement that social media platforms operate a comprehensive system of pre-moderating the content they carry. That’s unlikely – but let’s keep our fingers crossed just in case.
Footnote One: Want a good local example of the impracticality of an extensive form of preventive moderation of online content? You
are probably reading this column on Werewolf, which is published on the Scoop platform. Every year, Scoop also publishes
close on a million New Zealand press releases issued by all and sundry. In that respect, Scoop functions as a national
community noticeboard. It rejects press releases that contain libels and/or socially inflammatory hate speech. Imagine
though, if Scoop was required to pre-check every one of those press releases for accuracy, balance and for whether or
not they might hurt the feelings of people in public office. It would not be remotely practical or affordable for Scoop
to do so - and its efforts would be gamed by those with malice in mind against the organisations issuing the press
releases in question.
Footnote Two: All very well for Ardern to demand that social media platforms assume more responsibility for moderating what they
carry. Yet there is plenty of evidence that moderating online content is really bad for the mental health of the people who do the moderation work for Facebook.
If we want a whole lot more pro-active moderation, who do we expect to do it – and what practical steps can governments
usefully take to improve the pay and conditions of the workers concerned? Because to date, Zuckerberg hasn’t seemed very
interested in improving the work conditions in this part of his empire.
Footnote Three: Section 230 is actually not a monolith offering blanket legal protection to all US online platforms. In 2018, as a result of the prosecution of a child sex trafficking business carried out via a social media platform called Backpage, Section 230 immunity was amended significantly by Congress. The details of the FOSTA-SESTA legislation that put Backpage out of the child sex trafficking business can be found here, and also here.
BTW, any Internet legal nerds interested in tracing the 50-year evolution of US censorship law that culminated in the Section 230 online ‘safe harbour’ protections can find a great podcast discussion between Techdirt’s Mike Masnick and US law professor Jeff Kosseff on that subject right here.