Govt mulls foreign AI fast lane for regulatory approvals


Joseph Brookes
The federal government is considering allowing AI technologies that have already satisfied tough regulatory requirements overseas to be deployed with fewer checks in the Australian market.

In an effort to encourage consistency with increasing global regulation of AI, Australia’s new “guardrails” for high-risk applications could include recognition of compliance with foreign schemes, like the European Union’s upcoming AI Act.

The European scheme, while not yet in force, will impose new safety standards on high-risk AI technologies, such as requirements to test and assess systems before deployment, and to report on, explain and offer recourse to affected people afterwards.

AI fast lane: high-risk AI applications that have satisfied foreign regulatory requirements could have an easier path to the Australian market.

The Industry department is leading the Albanese government’s development of Australia’s own high-risk AI rules with the assistance of expert advisors and industry groups. Consistency and interoperability with more advanced approaches in the US, Canada and Europe are key considerations.

On Wednesday, general manager of the department’s AI governance branch Lucas Rutherford said Australia is very conscious of the EU AI Act and its upcoming rules.

“Should Australia go down the process of creating mandatory guardrails in this space, one of the considerations will be… should we recognise, effectively, if someone or an organisation or a product has gone through a similar level of due diligence [like] testing, accountability — whatever it is — should that be recognised in Australia,” he said.

“And that would be consistent with product safety requirements in other contexts as well.”

Mr Rutherford said this is not locked in but under consideration. He said the EU AI Act is the “most prescriptive and comprehensive model” but how effective it is remains to be seen.

He was providing evidence to a parliamentary committee on generative AI in education, and explained that interoperability will be a sharp focus of Australia’s upcoming regulatory scheme.

“There needs to be a level of interoperability between the standards imposed or required in the Australian continent and those applied in other jurisdictions,” he told the committee.

“The EU does get a lot of attention because it is probably the most legally prescriptive legislation out there. But there are certainly other developments underway.”

Canada is also in the process of developing its first ever regulatory framework specific to AI through the proposed Artificial Intelligence and Data Act. It will also impose new testing, record keeping and accountability requirements but is less strict than the European approach and has been criticised for overlooking rights.

An executive order signed by US president Joe Biden last year would require companies to share test results for their high-risk artificial intelligence systems with the US government before they are released.

A new standard setting out best-practice processes for evaluating whether an organisation is developing and using AI responsibly was published late last year, after Australian experts warned that most local businesses were not ready for it.

Mr Rutherford said all these developments are being considered in Australia’s upcoming approach and some consensus is forming.

“There seems to be an emerging international consensus around the types of pre-deployment best practice organisations should adopt. But there is still a level of evolution with respect to recognising that the technology itself is continuing to evolve.”

