Alan Finkel on AI ethics and law

Denham Sadler
National Affairs Editor

Australia should be a global leader in artificial intelligence ethics and human rights, with the consequences of even just one slip-up being “immense”, according to chief scientist Dr Alan Finkel.

Addressing judges of the Federal Court last week, Dr Finkel urged the legal sector to take a leading role in developing AI ethics and standards, and in the general oversight of the emerging technology.

The dangers and risks associated with AI need to be acknowledged and addressed now, and Australia could play a globally leading role in developing ways to combat and lessen them, he said.


“I believe that only by acknowledging and confronting this reality can we ensure that the darker aspects of AI do not tarnish both the value and the virtue of our scientific progress,” Dr Finkel said.

“As such, the development of AI provides us with an opportunity not only for intellectual growth, but for moral leadership,” he said.

“Through concerted and collective efforts we can fashion a framework that will enable Australia to be global leaders in the field of AI ethics and human rights. Showing the world how to advance the cause of scientific research while staying true to the ideals of a prudent and virtuous society.”

The federal government recently unveiled its ethical AI framework and announced that a number of large businesses would be trialling it. The “aspirational” framework is voluntary, but aims to “achieve better outcomes, reduce the risk of negative impact, practice the highest standards of ethical business and good governance”.

But the framework has been criticised for being too “fluffy” and a PR exercise, with some arguing that the government should focus its efforts on applying existing laws to the new technology.

A lack of certainty about how existing laws apply to technologies like AI is a significant danger to Australia, Dr Finkel said.

“The law is essential to preserving order in a democracy. And we cannot have order unless people are certain of the full scope of their rights and legal protections. As such, ambiguity over the principles that govern AI’s application threatens our way of life,” he said.

“The AI we want is a product of understanding and agreement and morality, based on justice and security and individual freedoms. But the risk of overreach – the possibility that we lose some of our core liberties in pursuit of progress – also becomes more pronounced.”

There are two key things that need to be done to combat the emergence of biases in AI, Dr Finkel said.

“We can guard against systemic bias by ensuring that no single AI-based risk assessment tool ever captures more than a small percentage of the market. [And] we can go a step further by only adopting AI that has been methodically trained to avoid introducing biases,” he said.

Technologies like AI are here to stay, and work needs to be done to ensure they are used ethically and fairly, rather than to try to stop their spread, Dr Finkel said.

“The impact of AI in Australia is no dream of the future. It is here, now, today. Artificial intelligence has moved forward at such dizzying pace that it is pushing us towards bold new frontiers of imagination and innovation,” he said.

“Resistance to the rampant march of technology is futile, and self-defeating. We cannot turn back the tide of technology, and we must therefore define the nature and scope of its application, or else it will define us. We must always remember that the same enlightened society that advanced the cause of science has also advanced the cause of justice.

“While AI can be a powerful aid to our cause, the consequences of any single slip-up are immense.”
