Govt AI ethics ignore our laws


Denham Sadler
National Affairs Editor

The government should focus on enforcing existing laws and strengthening regulations to govern the use of artificial intelligence, rather than its “fluffy” ethics framework, according to Deakin University senior lecturer Dr Monique Mann.

Industry Minister Karen Andrews last week unveiled an AI ethics framework and announced that a number of large businesses, including NAB, Telstra and Microsoft, would trial the principles.

The framework boils down to eight principles: human, social and environmental wellbeing; human-centred values; fairness; privacy protection and security; reliability and safety; transparency and explainability; contestability; and accountability.

The “aspirational” framework is entirely voluntary but aims to “achieve better outcomes, reduce the risk of negative impact, practice the highest standards of ethical business and good governance”.

The framework is a key deliverable from the near-$30 million in funding provided in last year’s budget for artificial intelligence ethics.

AI ethics frameworks like the one the Australian government has just unveiled are “very in vogue”, Dr Mann said, but they risk “ethics-washing” and allowing businesses to bypass actual laws.

“There’s all of this hype and rush to have these frameworks but the real questions go to the application. We already have fields of law that would apply,” Dr Mann told InnovationAus.com.

“We should be applying the laws we have before we start developing new systems that are not enforceable,” she said.

“Lots of corporations are developing these frameworks that are fundamentally doing very unethical things. It seems like these AI ethics principles are like a propaganda arm to advance unethical business practices but seem like they’re doing it ethically with this ethics-washing approach.”

The near-$30 million in funding should instead have gone towards better enforcement of existing privacy, data protection and consumer protection laws as they apply to AI, and better funding of bodies like the Office of the Australian Information Commissioner, Dr Mann said.

“I think it’s positive we’re thinking about these things and having the conversations but they need to be backed with the enforceable laws that we already have. We have privacy laws, and we need to have robustly funded and strong regulators to be able to implement and enforce laws we already have,” she said.

“It’s a way to avoid hard law and regulation by having these statements. We should be focusing on the enforceability of our laws and regulations. AI is a buzzword, and an ethical approach to it is also a buzzword. There’s a lot of hype but not much substance.

“As the technology develops and advances, there’s a real need to be having these conversations and doing it in a way that’s strategic and enforceable, otherwise it’s fluffy and what’s the point?”

The government released a discussion paper on the framework earlier this year which was a “dog’s breakfast” that showed a “fundamental misunderstanding of Australian privacy law”, Dr Mann said.

Salinger Privacy director Anna Johnston, along with a number of co-authors including Dr Mann, wrote a submission to the government outlining these issues. But instead of clarifying the errors, the government removed all references to privacy law, Ms Johnston said.

“The government has wasted a lot of energy and taxpayers’ money to first come up with some draft guidance which was so inaccurate in its statements about privacy law that they had to junk it, and now in its place has released principles which are so motherhood in nature that they add no practical value,” Ms Johnston told InnovationAus.com.

“It’s like being tasked with clearing a minefield to make it safe for everyone to use, but instead just putting up a sign saying ‘please tread carefully’. That entire process has been dispiriting.”

The authors of the submission were not consulted as part of the government’s roundtables in the lead-up to the unveiling of the official framework.

“I question the hegemonic approach taken by the Australian government that neglects to consider other perspectives, especially around Indigenous data sovereignty and non-Western issues,” Dr Mann said.

“What voices are heard and what voices are silenced is also an important consideration in developing these principles.”

Clarification: Only a small portion of the near-$30 million in funding allocated to artificial intelligence in the 2018 budget went towards the AI ethics framework guidelines. The majority of the funding went towards cooperative research centre grants.
