Ed Husic releases discussion papers on AI regulation


James Riley
Editorial Director

As if the discussion had not already started across Australia, the Albanese government has released two discussion papers on artificial intelligence to inform a framework to ensure “appropriate safeguards” for the community.

Industry minister Ed Husic will on Thursday unveil a Safe and Responsible AI in Australia paper outlining regulatory and governance responses in Australia and overseas, and proposing options to tighten the frameworks for governing AI in this country.

Mr Husic will also release a National Science and Technology Council paper – from the Office of the Chief Scientist – called the Rapid Response Report: Generative AI. Both papers will be available for public comment.

The documents come a day after a second open letter from AI scientists and other notable figures warning on the risks of AI was released by the Centre for AI Safety. It follows a similar letter in March.

Mr Husic said that using AI safely and responsibly was a balancing act the whole world is grappling with.

“The upside is massive, whether it’s fighting superbugs with new AI-developed antibiotics or preventing online fraud,” he said. “But as I have been saying for many years, there needs to be appropriate safeguards to ensure the safe and responsible use of AI.”

The Safe and Responsible AI in Australia document is said to have been written by the Industry department, although its authors are not clear.

The discussion paper makes the point that while global investment in AI is increasing, adoption rates of AI in Australia remain relatively low.

“One factor influencing adoption is the low levels of public trust and confidence of Australians in AI technologies and systems,” it said.

“Building public trust and confidence in the community will involve a consideration of whether further regulatory and governance responses are required to ensure appropriate safeguards are in place.

“A starting point for considering any response is an understanding of the extent to which our existing regulatory frameworks provide these safeguards.”

The paper is a cry for help in trying to understand whether regulation in Australia will have an impact on local use of AI products imported from overseas. It asks whether safeguards put in place in Australia might run ahead of what is happening elsewhere in the world.

“While Australia already has some safeguards in place for AI and the responses to AI are at an early stage globally, it is not alone in weighing whether further regulatory and governance mechanisms are required to mitigate emerging risks,” it says.

“Our ability to take advantage of AI supplied globally and support the growth of AI in Australia will be impacted by the extent to which Australia’s responses are consistent with responses overseas. However, the early responses of other jurisdictions vary.”

“Some countries like Singapore favour voluntary approaches to promote responsible AI governance. Others like the EU and Canada are pursuing regulatory approaches with proposed new AI laws,” the paper says.

“The US is consulting on how to ensure AI systems work as claimed, and the UK has released principles for regulators supported by system-wide coordination functions. G7 countries in May 2023 agreed to prioritise collaborations on AI governance, emphasising the importance of forward-looking, risk-based approaches to AI development and deployment.”

Ahead of the release of the two discussion papers, the Australian Information Industry Association (AIIA) has warned that regulation in isolation from industry would stifle innovation.

“That’s why we are calling for meaningful participation from both Government and industry to establish flexible guardrails as generative AI technologies evolve,” AIIA chief executive Simon Bush said.

“It is our opinion that for many existing AI use-cases in sectors such as transport and health, self-applied frameworks can be effective in managing the adoption of such technologies.

“We are seeing best-practice guardrails evolve through collaborations between academics and industry leaders. Government needs to back this work and engage industry in any potential regulatory frameworks.”

Last week, a parliamentary inquiry was launched into the risks and potential opportunities of generative artificial intelligence tools like ChatGPT in school and higher education settings, following a referral by Education minister Jason Clare.

Do you know more? Contact James Riley via Email.
