The government is wary of over-regulating new technologies such as artificial intelligence and will resist making ethics standards and codes mandatory for Australian businesses, Digital Economy minister Jane Hume says.
In an address to the Committee for Economic Development of Australia (CEDA), Senator Hume said the federal government would play an enabling role in accelerating the growth of artificial intelligence, along with setting standards in terms of ethics.
“AI, along with other digital technologies, will play an increasingly important role in our economy and society over the next decade and beyond,” Senator Hume said.
“As we continue to vault forward in this space, government has a pivotal role to play as an enabler, and as a standard setter – particularly in regards to ethics.
“The government has a significant responsibility … to ensure that AI, as an industry as well as a technology, has every chance to flourish, making sure we have the right settings, skills and expertise in place to ensure Australia is a global forerunner.”
The May budget allocated $124 million to artificial intelligence initiatives, including $50 million for a National AI Centre within the CSIRO and $34 million in grants for AI projects addressing national challenges.
The Coalition has also unveiled AI ethics principles, with eight guiding principles “designed to help achieve safer and more reliable outcomes for all Australians”.
These principles and other standards around AI are currently entirely voluntary for Australian businesses, and Senator Hume said the government will avoid making them mandatory.
“I obviously would rather have a voluntary code where industry has the input to what’s in the code. We want industry to adopt it themselves, but because we’re not dealing with one industry, that makes it much harder,” she said.
“The ethics principles are kind of universal. There’s nothing in there people would feel uncomfortable with, there’s nothing too prescriptive and there’s nothing in there that’s particularly onerous.”
Senator Hume said the government would seek to impose as few new regulations on tech businesses as possible.
“Our regulations certainly have to be flexible enough to accommodate technological changes, but we want to make sure there’s nothing in regulations and legislation that prevents the advancements of technology,” Senator Hume said.
“Building new regulations for technology, unless we can see the use cases for it, it’s something we’d be reluctant to do, to over-regulate and over-prescribe.”
There are already existing laws and regulations that can govern new technologies such as AI, she said.
“There’s more we can do with the regulatory framework. There are already privacy laws, consumer laws, a Data Commissioner and a Privacy Commissioner. These guardrails already sit around the way we run our businesses,” Senator Hume said.
“AI is simply a technology that’s being imposed upon existing businesses. It’s important that while technology is being used to solve problems, the problems themselves haven’t really changed.”
Also speaking at the CEDA event, Microsoft corporate affairs director Belinda Dennett said now is the time to build a social licence around AI.
“We’re still in that window, but we don’t have long. The key to building alignment is in establishing trust, and that’s built in open dialogue,” Ms Dennett said.
“Governments should seek to lower the barrier to AI adoption and provide an enabling regulatory environment, and be leaders in the responsible use of AI.”
Do you know more? Contact James Riley via Email.
Business owners have been dramatically and financially impacted by sudden Facebook ad account shutdowns (all run by Facebook’s AI and machine learning) without warning, explanation, or even a way of contacting a human at Facebook. Kate Crawford of Microsoft was recently featured in a Guardian article arguing that ‘AI is neither artificial nor intelligent’ and calling for laws to be made in Australia with regard to AI, not just a code of ethics.
Facebook, being a private company, is a law unto itself. When you join up, you abide by its terms of service. That ‘service’ is now all automated AI and machine learning, which quite frequently gets decisions very wrong and will mistakenly shut down business owners’ Facebook pages, with traumatic impacts – including chest pains, emotional distress, and financial loss – through no fault of our own.
We have sent a formal appeal for Senator Hume’s help in addressing and regulating Facebook’s concerning use of unmonitored, dysfunctional AI and machine learning in its ad accounts and customer service. We want laws made so that Facebook is forced to provide a human customer service representative within 24 hours when a matter is escalated by a business owner, rather than its chat bots auto-directing to a policy page and then auto-closing the chat.
Try reading this article while substituting the word “technology” with “industry” or “business”. Senator Hume is opening the door to an industry that follows the strategy of move fast and break things. By the time pro-business governments are forced to regulate, the businesses and systems will be well established and the prospect of walking them back will be made to look prohibitive. Corporations are already leading government departments into using off-the-shelf decision-making systems based on commercially secret algorithms. We are already locked into public-private system developments over which we have no oversight and whose consequences won’t be grasped for another decade.
Allowing a short while to see how the industry takes up the voluntary standards is fine, but like all things, the rogue element will not comply unless the standards are mandatory, and even then detection of contraventions will be very difficult.
I advocate for all autonomous devices to be registered with a human being nominated who takes responsibility for the device’s conduct – a kind of enhanced public officer concept. It should also be mandatory for every autonomous device to have a kill switch which cannot be disabled.
Senator Jane did a BCom at the University of Melbourne. Then she did a Grad Dip in Finance and Investment at the Securities Institute of Australia and another Grad Dip in Political Science at the University of Melbourne. Then she worked in banking. How could she know anything about AI? Why is she speaking? Why would I listen? Why would I trust? I think I won’t. Thanks Jane.