The government is wary of over-regulating new technologies such as artificial intelligence and will resist making ethics standards and codes mandatory for Australian businesses, Minister for the Digital Economy Jane Hume says.
In an address to the Committee for Economic Development of Australia (CEDA), Senator Hume said the federal government would play an enabling role in accelerating the growth of artificial intelligence, and in setting standards around ethics.
“AI, along with other digital technologies, will play an increasingly important role in our economy and society over the next decade and beyond,” Senator Hume said.
“As we continue to vault forward in this space, government has a pivotal role to play as an enabler, and as a standard setter – particularly in regard to ethics.
“The government has a significant responsibility … to ensure that AI, as an industry as well as a technology, has every chance to flourish, making sure we have the right settings, skills and expertise in place to ensure Australia is a global forerunner.”
The May budget allocated $124 million to artificial intelligence initiatives, including $50 million for a National Artificial Intelligence Centre within CSIRO and $34 million in grants for AI projects addressing national challenges.
The Coalition has also unveiled AI ethics principles, with eight guiding principles “designed to help achieve safer and more reliable outcomes for all Australians”.
These principles and other standards around AI are currently entirely voluntary for Australian businesses, and Senator Hume said the government will avoid making them mandatory.
“I obviously would rather have a voluntary code where industry has the input to what’s in the code. We want industry to adopt it themselves, but because we’re not dealing with one industry, that makes it much harder,” she said.
“The ethics principles are kind of universal. There’s nothing in there people would feel uncomfortable with, there’s nothing too prescriptive and there’s nothing in there that’s particularly onerous.”
Senator Hume said the government would seek to impose as few new regulations on tech businesses as possible.
“Our regulations certainly have to be flexible enough to accommodate technological changes, but we want to make sure there’s nothing in regulations and legislation that prevents the advancement of technology,” Senator Hume said.
“Building new regulations for technology, unless we can see the use cases for it, it’s something we’d be reluctant to do, to over-regulate and over-prescribe.”
There are already existing laws and regulations that can govern new technologies such as AI, she said.
“There’s more we can do with the regulatory framework. There are already privacy laws, consumer laws, a Data Commissioner and a Privacy Commissioner. These guardrails already sit around the way we run our businesses,” Senator Hume said.
“AI is simply a technology that’s being imposed upon existing businesses. It’s important that while technology is being used to solve problems, the problems themselves haven’t really changed.”
Also speaking at the CEDA event, Microsoft corporate affairs director Belinda Dennett said now is the time to build a social licence around AI.
“We’re still in that window, but we don’t have long. The key to building alignment is in establishing trust, and that’s built in open dialogue,” Ms Dennett said.
“Governments should seek to lower the barrier to AI adoption and provide an enabling regulatory environment, and be leaders in the responsible use of AI.”