Government should be cautious about moving to new law for “deepfake” audio and video, says a new Law Foundation-backed study released today.
Artificial intelligence techniques can create massive volumes of fake audio, images and video that are incredibly convincing and near-impossible to detect.
Co-author Tom Barraclough predicts that deepfake and other synthetic media will be the next wave of content causing
concern to government and tech companies following the Christchurch Call. While it is tempting to respond with new law,
the study finds that the long list of current legislation covering the issues may be sufficient.
“At least 16 current acts and formal guidelines touch on the potential harms of synthetic media (see note 1 below),” Tom
says. “Before calling for new law or new regulators, let's work out what we've already got, and why existing law is or
isn’t working.”
“Enforcing the existing law will be difficult enough, and it is not clear that any new law would be able to do better.
Overseas attempts to draft law for deepfakes have been seriously criticised.”
The report follows a nine-month legal research project by Tom Barraclough and Curtis Barnes at Brainbox Limited, funded
by the New Zealand Law Foundation. Their findings echo positions adopted by civil society organisations at the
Christchurch Call.
The researchers say their work is the first step in a more specific analysis of how New Zealand law applies to synthetic
media and other online content.
“Even if we can all agree something must be done, the next step is agreeing on specifics. That can’t be done without
close analysis of the status quo,” Tom says.
“Calling for some kind of social media regulator is fine, but these suggestions need substance. What standards will the regulator apply? Would existing agencies do a better job? What does it mean, specifically, to say a company has a duty of care? The law has to give any court or regulator some guidance.”
“Further, we must ask what private companies can do that governments can’t. We have to consider access to justice: often a quick solution is more important than a perfect one. Social media companies can restrict content in ways that governments can’t: is that something we should encourage or restrict?”
The researchers point out that fake video is not inherently bad: synthetic media technologies are a key strength of New Zealand’s creative industries, and these should not be stifled. But there are many harmful uses that do need to be curtailed, including the creation of non-consensual pornography and the use of synthetic speech to produce false recordings of public figures.
“‘Fakeness’ is a slippery concept and can be hard to define, making regulation and automatic detection of fake media very difficult,” Tom says. “Some have said that synthetic media technologies mean that seeing is no longer believing, or that reality itself is under threat. We don’t agree with that. Harms can come from too much scepticism as well as too little.”
Brainbox is a research company and think-tank focused on the intersection of law, policy and emerging technologies. The
Perception Inception project was funded by the New Zealand Law Foundation’s Information Law and Policy Project, with
additional support from the New Zealand Law Foundation Centre for Law and Policy in Emerging Technologies at the
University of Otago.