While it’s easy to deride the Christchurch Call to Action as a set of well-meaning platitudes – which it is – the zigzagging nature of its content (“steps to eliminate terrorist and violent extremist content online” on one hand… and “must be consistent with a free, open and secure Internet, without compromising human rights and fundamental freedoms, including freedom of expression” on the other) reflects the genuine tension that exists on this subject.
After sifting through this, ahem, Christchurch Call manifesto… it is easy to see why Facebook, Twitter etc. prefer this voluntary, pain-free, collaborative-with-government approach to online reform. The pro-competition passages that talk about “capacity building aimed at smaller online service providers” and that “support smaller platforms as they build capacity to remove terrorist and violent extremist content” will also help CEO Mark Zuckerberg to fend off the recent round of antitrust calls to break Facebook up into competing units.
While the US did not take part in the Christchurch Call meeting – and the White House has already explicitly dismissed
its approach to countering online extremism – there are some implications for the powers and practices of the US tech
giants. For example: the priority that the Christchurch Call gives to enhanced moderation is entirely consistent with
retaining the current US legal framework; most notably, the much-reviled Section 230 provisions that protect tech
companies from legal liability for the takedowns and bans they enact.
Very little of what the Christchurch Call advocates – including the retention of a free and open Internet and the
building of capacity by Facebook’s smaller rivals behind a shield against liability – would be possible without the
umbrella protection that Section 230 provides. Although widely misunderstood as a free pass for tech companies to avoid
the task of moderation entirely, Section 230 is in fact the legal bedrock for what the Christchurch Call is advocating.
Regulating Extremism, Nicely
So, besides fostering online competition and shoring up Net freedoms against US pressure to politicize the moderating
process (by giving more latitude to the right-wing extremists that the US President loves to retweet), what does the Christchurch Call propose
to do about hate speech and online extremism?
Well, it talks about “effective notice and takedown procedures.” And in a nod to those aggrieved US political
conservatives, it also zigzags by acknowledging a need to “provide an efficient complaints and appeals process for those
wishing to contest the removal of their content, or a decision to decline the upload of their content”. Meaning: if the
extremist likes of Alex Jones become subject to notice and takedown, they will also have formal rights of appeal. (Given
the likely deluge of applicants, that appeal body better start hiring right now.)
How an independent notice/takedown and appeal system would be set up (and under what terms and conditions it would
operate) is not spelled out in the Christchurch Call. Elsewhere, Zuckerberg has signaled Facebook’s readiness to
consider such a system, though. In the meantime, an onus will be placed on tech companies to regularly report on their
progress in removing extremist content. For now though, the Christchurch Call merely “encourages” media companies “to
apply ethical standards when depicting terrorist events online” and to “consider” appropriate action to prevent the
online dissemination of terrorist or violent extremist content.
Algorithms At Bay
On the even thornier issue of managing the online algorithms, the Christchurch Call kicks for touch once again, via
extensive use of the subjunctive. There will be a “review” of algorithms that “may” drive users “towards and/or amplify terrorist and violent extremist content” – but not so much, it seems, as to justify the removal or curtailment of such algorithms. Hey, these algorithms may even help us all “to better understand possible intervention points and to implement changes where this occurs”. Peachy.
Again the review process “may” include “using algorithms and other processes to re-direct users from such content or the
promotion of credible, positive alternatives or counter-narratives”. Meaning: you liked that racist Lauren Southern video? Well, in future, instead of having the YouTube algorithm send you deeper in the direction of some neo-Nazi hell gathering, a Noam Chomsky video might pop up on your feed in its place. And vice versa. You liked Noam C? Then try this Jordan Peterson video, for a more balanced online diet.
The process envisaged by the Christchurch Call also “may” include “building appropriate mechanisms for reporting,
designed in a multi-stakeholder process, and without compromising trade secrets or the effectiveness of service
providers’ practices through unnecessary disclosure.” Hmmm. Stakeholder-driven, amid a disclosure process where no trade secrets or operational practices are to be revealed? Good luck to any independent online watchdog trying to drive around those roadblocks.
At the heart of this issue of course, is the lack of consensus in an increasingly polarized world as to what constitutes
unacceptably extremist content. For their part, authoritarian regimes would welcome any regulatory tools that enable
them to identify, crack down on and punish online dissent. Even what may sound like a relatively narrow focus by the
Christchurch Call document on “terrorist or violent extremist content” could still be defined quite widely by regimes
that felt so inclined. Like the famous non-definition of pornography, the Christchurch Call just tends to assume that
we’ll instinctively know the content that encourages terrorism and extremism, when we see it.
Regardless of these drawbacks, you could still argue that the well-meaning generalities contained in the Christchurch
Call document are superior to the attempts made thus far by various countries (Australia, the UK, India) to define what
constitutes the offensive material in question. For a hideous example: one provision of India’s draft legislation to
combat fake news would require tech companies to police and automatically filter out any content deemed to be
“blasphemous, defamatory, obscene, pornographic, paedophilic, libellous, invasive of another’s privacy, hateful,
racially or ethnically objectionable, disparaging, relating or encouraging money laundering or gambling, or otherwise
unlawful in any manner whatever.” Another draft rule would require tech companies to break end-to-end encryption if
asked to do so by authorities keen to trace the source of objectionable content.
Encryption is one hot potato that the Christchurch Call document chose not to handle at all – even though the encrypted
WhatsApp messaging service (owned by Facebook) quickly became a vehicle for disseminating the mosque attack video
simultaneously being taken down by the Facebook moderators. Encryption exposes the tension between valid privacy concerns and the keen interest that governments/Police/intelligence agencies always have in expanding their tools of
mass surveillance. Alas, it would become an issue for any genuine notice and takedown system.
So… there was a lot that the Christchurch Call did not address, beyond its good intentions. It didn’t address encryption
or establish an independent notice-and-takedown and monitoring body. Crucially, it didn’t call for the abolition of
Facebook’s livestreaming service that provides a global platform for politically motivated murder. Prior to the Paris
meeting, PM Jacinda Ardern even welcomed Facebook’s weak alternative of (briefly) denying access to its platform for
those users who circulate the likes of the mosque attack video.
The Call document also didn’t condemn the way the algorithms currently push users towards the extremist content that’s supposedly at issue. Or address how the algorithms serve to intensify the appetites for such material:
The problem with YouTube's recommender algorithms might be that they overdistil your preferences. Since they're aiming for "engagement"… the real problem with these algorithms is that they're constantly
aiming to create an epic sense of drama and newness. At the tail-ends of this bimodal attentional landscape, only the
Xtreme can survive. And of course, this precisely leverages our novelty-seeking psychology, which really does snap to
attention when we're presented with intense stuff. So it's not that YouTube radicalizes politics specifically. It
radicalizes everything….
Finally, the Christchurch Call didn’t call out the seemingly benign “celebration” and “memories” functions of the Facebook algorithms, which can also have perverse consequences:
According to a five-month-long, 3,000-page National Whistleblowers Center study of terror groups on Facebook, the
celebration/memories algorithm is auto-generating anthology pages that celebrate and frame the most effective terror
messages created by extremists, giving them much-needed help in promoting their message to their base.
All of this could theoretically come under the “review” that the Christchurch Call hopes to launch. That’s the problem,
though. It issues a “call” to action and few would oppose its humane goals. (Like apple pie, what’s not to like?) But
there’s disappointingly little in the way of content, or focus. As a map to a destination, it reminded me a bit of that
old Stephen Leacock line: “He flung himself from the room, flung himself upon his horse, and rode madly off in all
directions.” It’s just gestural politics, at this point.
Lizzo, and Those Darned Algorithms (Again)
With “Truth Hurts”, Lizzo finally seems to have the hit track fit for the persona… Recently, Matthew Perpetua struggled to
express the unease he felt about the rise of Lizzo and the “cultural cartography” of which she is the perfect expression:
…Lizzo’s music is perfectly engineered to the point that it can seem like it’s already gone through extensive A/B
testing and optimization. It’s glossy and immediately accessible, but signals some degree of authenticity and
soulfulness. It’s aggressively sincere and every song is clearly about a particular statement or relatable situation.
It’s all geared towards feelings of empowerment, and given how many ads, shows, and movies want to sell that feeling,
her songs are extremely effective and valuable, especially since up until recently she was not famous and thus not
weighed down in the cultural baggage of celebrity.
I can’t hear Lizzo’s music without recognizing her cultural cartography savvy. A lot of music can achieve these goals
without contrivance, often just as a natural side effect of an artist intuitively making resonant work, but Lizzo’s
songs all sound very calculated to me. This is not such a bad thing…. Lizzo has a good voice, and her songs range from
“pretty good” to “undeniable banger” but I have mixed feelings about all of it because I know the game being played
rather well, and because I’m uncomfortable with this self-consciously audience-pleasing approach to content creation
becoming the primary mode of pop culture. I appreciate the value of empowering art – and as someone who has spent his
entire adult life as a fat man, I am particularly sympathetic to Lizzo’s fat-positivity – but fear mainstream culture
further devolving into nothing but shallow exclamations of self-affirmation. We’re more than halfway there already,
especially when you factor in YouTube.
This music makes me want to rebel against it. I never ask that any music be “for me” – I prefer art to offer a window
into other lives and ways of thinking – but Lizzo’s songs are often so transparent in their intended use as a way for
square, insecure people to feel empowered and cool that I can’t help but hear it and think “but I don’t actually want or
need this!” She reminds me a lot of Macklemore, whose big hits “Same Love” and “Thrift Shop” had a similar quasi-cool
accessibility and cultural cartography value at the time. In both cases, making fun of them feels cheap and churlish, or
like a sideways attack on fat women, LGBT rights, or uh, value shopping. But for me, it’s really just developing an
allergy. I hear too much of music like this, or see too many shows and movies that are obviously designed with cultural
cartography in mind, and I just run screaming back towards artsy ambiguity.
True, every word. But also hilarious as an example of the allergic response we all now have against being played, and
having our cultural buttons pushed gratuitously. It’s instinctive, but is a barrier to being in the moment, nonetheless.
With such caveats in mind, here’s “Truth Hurts”: