As investment in generative AI continues to grow globally, New Zealand’s government has been adopting the technology across the public sector and encouraging businesses to embrace its potential.
But the environmental, social and governance risks and costs of AI remain under-investigated.
In particular, the energy-hungry computations behind generative AI mean an ever-expanding carbon footprint for the technology, just as countries are expected to make more ambitious commitments to cut emissions at next month’s United Nations climate summit (COP29).
We argue that questions around AI’s environmental and social impact need to be part of the conversation about the role it should play in New Zealand society.
Societal costs and risks of AI
Currently, the global use of AI gobbles up as much energy as a small country, and this consumption is expected to double by 2026. Increasingly sophisticated AI is also projected to double the number of data centres over the next four years.
This ever-expanding reliance on data centres brings sustainability worries. The average data centre already uses about 40% of its power for cooling, often relying on local water supplies.
AI also brings social risks to employees and users. Its capabilities may result in job displacement, and the wellbeing of staff who train AI can suffer if they are repeatedly exposed to harmful content.
Governance pitfalls include concerns about data privacy, breaches of copyright law and AI hallucinations. The latter refers to outputs that sound plausible but are incorrect or irrelevant, yet can still influence decision making.
Why this matters for New Zealand
Like other countries, Aotearoa is rapidly adopting generative AI, from business to the courts, education and the work of government itself.
A recently announced collaboration between Microsoft and Spark Business Group means New Zealand will join the hyperscale data centre trend. Hyperscale data centres are built to meet vast data processing and storage needs.
Once completed, a new hyperscale cloud region promises to enable New Zealand businesses to scale up locally, all powered by 100% carbon-free energy provided through an agreement with Ecotricity.
For the moment, many of the environmental and social costs of New Zealand’s growing AI use are being borne elsewhere. The issue for New Zealand right now is one of global entanglement. Asking ChatGPT a question in New Zealand means relying on overseas data centres, drawing on electricity from their local grids and, most likely, local water for cooling.
Data centres are scattered across the world and many are located in developing countries. Even where data centres use renewable electricity sources, this diverts supply from other priorities, such as the electrification of public transport.
This is ethically problematic because other (often poorer) countries are shouldering the burden of New Zealand’s AI use. It may also be legally problematic. As a developed country and party to the Paris Agreement, New Zealand is committed to taking a lead in addressing climate change. This means setting progressively more ambitious emissions reduction targets (known as nationally determined contributions).
Last year’s United Nations global stocktake on climate action confirmed that countries’ total efforts are insufficient to limit temperature rise to well below 2°C above pre-industrial levels. New Zealand’s contribution is also falling short.
The tension between increasing use of generative AI and meeting climate goals is one that climate change minister Simon Watts and his team will be wrestling with as they prepare for next month’s climate summit in Azerbaijan.
Lower and smarter use of technology
To press on with New Zealand’s commitment to address climate change, we need to focus on entangled solutions to deal with the growing environmental, social and governance costs of generative AI.
One approach to thinking about the tensions between AI use and its escalating impacts is “digital sobriety”, a concept that encourages reduced technology use.
This is similar to our approaches to reducing water consumption and waste. It involves asking ourselves whether we really need the latest smart device or a bigger data plan.
Another potential remedy is to scale down slightly and use small language models instead of more resource-hungry large language models. These smaller models require less computational power and can run on smaller devices.
Integrating sustainability into AI guardrails would also help to balance some of the environmental impacts of generative AI. Guardrails are filters or rules that sit between inputs and outputs of the AI to reduce the likelihood of errors or bias. Currently, these safeguards are mainly focused on fairness, transparency, accountability and safety.
As the Paris Agreement acknowledges, adopting sustainable patterns of consumption plays an important role in addressing climate change. Careful thinking now about how we adopt hyperscale generative AI in New Zealand in sustainable ways could help steer the country towards a more responsible relationship with this powerful and swiftly developing technology.
Nathan Cooper, Associate Professor of Law, University of Waikato and Amanda Turnbull, Lecturer in Law, University of Waikato
This article is republished from The Conversation under a Creative Commons license. Read the original article.