Compute capacity chokepoints offer a novel regulatory framework for AI

We are bombarded daily with new artificial intelligence applications, and with hype about both the potential of AI and its pitfalls.

It’s clear that AI offers significant, transformative potential in almost every aspect of society, from the economy and equality to security and science. It also has the potential to enable new threats, such as engineered pathogens or novel malware.

The stakes are high, and governments around the world are grappling with how to legislate for a complex, rapidly evolving technology with an extraordinary range of possible applications and outcomes. Compute capacity offers a novel regulatory framework: a point at which policy levers can be built into the algorithmic processes themselves.

Globally, there is a complex tapestry of government efforts to understand and regulate AI. Nations are pursuing a range of activities, from encouraging voluntary adoption of ethical principles to regulating technologies and their application in specific fields, whether through new frameworks or through existing, generic ones (such as competition and privacy legislation). The developing trend internationally is towards a risk-based approach to the governance of AI.

Australia, for example, currently has a discussion paper, ‘Safe and responsible AI in Australia’, out for consultation, which sets out the domestic and international regulatory landscape in detail and canvasses potential components of a risk-based approach. In the United States, the White House released the ‘Blueprint for an AI Bill of Rights’ in 2022, setting out five principles for voluntary adoption, following several earlier strategies and guidance documents.

Similarly, the UK released a white paper in 2023, ‘A pro-innovation approach to AI regulation’. The ‘EU AI Act’, currently making its way through the European Parliament, is billed as the world’s first comprehensive AI law and classifies AI systems according to the risk they pose to the user. These approaches largely focus on regulation at the point where AI is created or where it is used.

As I discussed recently on the Technology & Security podcast with Jason Matheny, president and chief executive of RAND Corporation, success in any of the constellation of technologies that comprise AI requires, among other things, lots of data and leading-edge compute capacity. This dependence offers an opportunity for regulation: AI applications need significant compute capacity both at the point of training and at the point of use.

The building blocks of this compute capacity, semiconductors, have a very narrow supply chain, with the most advanced chips made only in Taiwan. The compute capacity those semiconductors go into, and the places where it is located, are similarly concentrated: right now, in a set of companies and democratic countries that follow the rule of law.

Just as the narrow semiconductor supply chain has made hardware export controls a viable way to reduce technology transfers for malign purposes, we explored whether software chokepoints could similarly serve as a regulatory framework.

During our discussion, Jason outlined a strategy that offers a proactive, risk-based approach to regulating commercial providers of computing capacity. “In short, if you really want to ensure that these models are ones that are used responsibly, we’re in a pretty favourable window of opportunity right now where we can have those end-user controls at the point of compute… cloud computing providers could do ‘know your customer’ screening, but also ‘know your process’ — or algorithm — screening,” Matheny said in episode 6 of Technology & Security.

One possibility, for example, would be to introduce a risk-based regulatory framework that screens certain computational processes as well as customers. Compute and cloud providers could be asked to undertake risk assessments of how their services are used: is this particular process running on my computing infrastructure one that is developing a promising formula for a medicine, or one that is likely training a cyber weapon?
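To make that idea concrete, here is a minimal sketch, in Python, of what combined ‘know your customer’ and ‘know your process’ screening might look like at a compute provider. It is purely illustrative: the workload fields, risk categories and thresholds are assumptions invented for this example, not a description of any existing provider’s system or any proposed regulation.

```python
from dataclasses import dataclass

# Hypothetical thresholds; a real regulatory scheme would define these.
REVIEW_FLOPS = 1e24   # training runs above this trigger human review
DENY_SCORE = 5        # combined risk score at or above this blocks the job

@dataclass
class Workload:
    customer_verified: bool   # passed 'know your customer' screening
    declared_purpose: str     # e.g. "drug discovery", "malware generation"
    training_flops: float     # estimated compute for the job
    data_domain: str          # e.g. "genomics", "exploit corpora"

# Hypothetical high-risk categories for 'know your process' screening.
HIGH_RISK_PURPOSES = {"malware generation", "pathogen design"}
HIGH_RISK_DOMAINS = {"exploit corpora", "toxin sequences"}

def screen(job: Workload) -> str:
    """Return 'allow', 'review' or 'deny' for a proposed compute job."""
    score = 0
    if not job.customer_verified:
        score += 3
    if job.declared_purpose in HIGH_RISK_PURPOSES:
        score += 4
    if job.data_domain in HIGH_RISK_DOMAINS:
        score += 4
    if score >= DENY_SCORE:
        return "deny"
    if job.training_flops >= REVIEW_FLOPS or score > 0:
        return "review"
    return "allow"

print(screen(Workload(True, "drug discovery", 1e22, "genomics")))            # allow
print(screen(Workload(True, "malware generation", 1e20, "exploit corpora"))) # deny
```

A real scheme would be far more involved, but even this toy structure shows where regulatory thresholds and screening rules could attach to the provision of compute itself.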

This approach of compute chokepoint regulation might also be effective in relation to other emerging technologies. There are similar chokepoints in quantum supply chains: superconducting approaches to quantum sensing and quantum computing often rely on Josephson junctions manufactured using niobium, and there are not many niobium foundries in the world.

In biotech, DNA synthesis and DNA sequencing rely on tools with components that are not widely manufactured, with many of the leading makers located in the United States, the United Kingdom and Europe. These, too, represent potential chokepoints. Nevertheless, the semiconductor industry is unique in its level of concentration and, in many ways, is a gateway to other emerging technologies, which require advanced chips as a foundation to build on.

In developing regulatory approaches to new and emerging technologies, it is critical to understand the supply chains and chokepoints of their fundamental inputs, from critical minerals and rare earths to data access and compute capacity. The challenge for regulation is not to stifle innovation but to ensure the integrity of systems and processes that are integrated across the economy. The aspiration, as always, is an approach to governance that enables the good uses of a technology while preventing the bad.

Dr Miah Hammond-Errey is the director of the Emerging Technology Program at the United States Studies Centre at the University of Sydney.
