‘Risk allocation’ left out of Australia’s AI debate


Joseph Brookes

Australia’s artificial intelligence discourse has been captured by “majoritarian thinking” that accepts a social cost for claims of massive economic value, former privacy watchdog Malcolm Crompton warned on Monday while trying to reframe the current debate.

“You always hear the words risk management, but nobody asks the prior question: risk allocation,” Mr Crompton, who was Australia’s Privacy Commissioner from 1999 to 2004, said at the NSW government’s Future of AI summit in Sydney.

Since the explosion of generative artificial intelligence use in the last 12 months, Australia’s policy and law makers have scrambled to keep up, with no shortage of stakeholder input.

The federal government is finalising its interim response to calls for dedicated regulations that received more than 500 submissions, while New South Wales has had to redraft its nation-leading governance framework to account for generative AI and the technology’s integration with other products.

While relatively harmless in trained hands, artificial intelligence creates very real risks, and in the last 12 months the technology was put in the hands of almost anyone with an internet connection, adding to the urgency of a new approach, the former Privacy Commissioner said.

“It was like giving a very high powered machine gun to everybody on the planet and then saying ‘don’t shoot’.

“So of course, it’s going to be a very different world from the way AI has been deployed in the past. Arguably ChatGPT is nothing better than a used car salesman — extremely good, engaging, persuasive language to sell you crap, unless you understand what you’re doing. So one size doesn’t fit all [as a response],” Mr Crompton said.

Throughout the debates, business and tech groups have championed AI as a productivity driving tool that will add to economic growth if allowed to be used with light touch regulation. The Tech Council of Australia has claimed AI could add $115 billion to the Australian economy by 2030 under a fast adoption model.

Rights groups have called for new regulation to address the inherent risks of the technology or for better application of current frameworks like privacy and consumer law.

Senior ministers around the country have framed their pending regulatory responses as ones that will strike a balance between capturing the value of AI and managing the risks it brings like bias, exclusion and worker displacement.

“Too often we’re in the communitarian or majoritarian thinking: look at the macroeconomic gain measured in billions of dollars,” Mr Crompton said. “There’s going to be a small group of people who lose out. Bad luck. [But] that’s not the way to think.

“How a democracy manages for its minorities — whether they’re gender minorities, or whether they are racial or ethnic or other language minorities, or disability, or anything else — it’s how you handle minorities that tells you whether you’ve got a healthy democracy or not. So think about risk allocation.”

According to Mr Crompton, who is now founder and partner of IIS Partners and remains a noted privacy expert, risk management is important, but AI brings inevitable consequences and more thought needs to be given to remediation.

“What’s your plan for failure? Everybody can tell you their plan for success… what’s the user experience when it’s not going right?”

Grappling with this and understanding there will never be a “one size fits all” approach to the technology will let governments and business do more with the technology, he said.

“If the likelihood of failure is reduced, and the impact of a failure is reduced, you can increase your risk appetite for failure… why not reallocate [budgets] towards the failure side of it, so that you can understand the failure more quickly, manage it better, reduce [the impact] for those who are affected and increase the risk appetite — actually a virtuous circle.”

Australia has historically not been good at “managing for failure”, contributing to its citizens being among the most sceptical in the world on AI, according to surveys. But work in New South Wales, where the government mandates the use of an AI Assurance Framework by public servants whenever the technology is deployed, is showing how to turn this around, Mr Crompton said.

“What it’s doing is allowing for minimal amounts of failure on smaller groups of people, where there’s basically an ambulance around the corner. So you can fix things up quickly for the smaller groups that are affected before you scale out. It’s a very good way of dealing with this.”

Mr Crompton said the NSW AI Assurance Framework could be improved by adding more monitoring requirements for when the technology is deployed.

“Because if you’re monitoring for it intensely and then you’ve got the feedback loops in place, you’re managing for failure and you’re learning more quickly, and you’re giving assurance to the people in Australia that you can continue to do better.”
