Ellen Broad on the future of artificial intelligence regulation


Denham Sadler
National Affairs Editor

It’s time to ditch the catch-all term “artificial intelligence” and move away from efforts to simplify and “neaten” regulation in the space, ANU School of Cybernetics Associate Professor Ellen Broad says.

In a policy paper included in The Innovation Papers, launched in Canberra on Thursday, Professor Broad said we need to move away from using artificial intelligence as a term to bind together a huge range of tech offerings and solutions, and instead develop a “more precise vocabulary to describe technical systems”.

“If we are actually going to realise the promise of more robust, reliable and safe AI systems, then first we need to relinquish our desire to contain many different kinds of systems under the term AI in pursuit of making things simple,” Professor Broad said in the policy paper.

“We could entertain years of objections about how low-risk activities could get swept up in efforts to regulate a few bad actors in high-risk settings, because we are reluctant to let go of AI as a discrete industry, its limits to be defined by an elite few. These debates would only serve those who benefit from inaction.

“Or we could get specific in our intentions and purposeful in how we use language to give life to those intentions. These issues are not simple. They deserve attention. The world is complex – and getting more so.”

Ellen Broad, Associate Director for Education, School of Cybernetics, ANU

Australia should follow the lead of the US-based Center on Privacy & Technology, Professor Broad said, which earlier this year announced it would no longer be using the terms “artificial intelligence”, “AI” or “machine learning”.

“The terms…placehold everywhere for the scrupulous descriptions that would make the technologies they refer to transparent for the average person,” the Center said in March.

“Rather it is something to which we have been compelled in large part through the marketing campaigns, and market control, of tech companies selling computer products whose novelty lies not in any kind of scientific discovery, but in the application of turbocharged processing power to the massive datasets that a yawning governance vacuum has allowed corporations to generate and / or extract. This is the kind of technology now sold under the umbrella term ‘artificial intelligence’.”

In her piece, Professor Broad said it is common to hear complaints about the current “patchwork” nature of AI-related regulation in Australia and around the world.

“While inconsistencies and contradictions have long been part of law-making challenges, these complaints speak to a desire to neaten things up, to gather together fraying threads,” she said.

“But when we look at the backdrop of other waves of technological innovation this patchwork effect not only makes sense, but is the sign of a deepening and more distributed landscape of expertise.

“Trying to pave over this only delays its growth. We’re part way on a journey that will occupy our lives, our children’s lives and the lives of their offspring. Considered this way, the increasing complexity of AI regulation is not to be fixed but to be faced head-on, requiring new professions and new expertise to be cultivated.”

It’s not possible to find a solution to regulate AI that is perfect or neat, she said.

“The perfect solution won’t be a simple rulebook, covering our entire society – it will be complex, reflecting the society it seeks to shape. It will be iterative, diverse and embrace complexity. And it will aim to build a robust, distributed and multilayered system of regulation and expertise,” Professor Broad said.

“When it comes to ‘regulating AI’, we shouldn’t let the false perfection of simplicity and unity be the enemy of the good: the messy, ongoing and partial reality of progress.”

In her policy paper, Professor Broad called on the federal government to expand existing grant schemes for new and novel technologies, with funding specifically for research and applications that promote safety, standardisation and verifiability of AI systems.

She also recommended the government take the best elements of technology assessment agency models, such as the US Office of Technology Assessment, and adapt them for Australia, and expand education programs that are embedded in real-world contexts.
