AI ethics gets complicated

James Riley
Editorial Director

Conversations around ethics have always been complex, but they are about to get a lot more complicated when artificial intelligence (AI) is thrown into the mix.

The nation’s Chief Scientist, Dr Alan Finkel, recently floated the idea of introducing a set of ethical standards for artificial intelligence, to be imposed on companies to protect consumers and the broader economy from nefarious practices.

He explained the scheme would involve handing out a so-called “trustmark” to companies that meet those agreed standards, and that it could work in much the same way as the Fairtrade logo given to coffee producers. But is it really that simple?

Paul Shetler: AI ethics shouldn’t be left to developers 

Paul Shetler, former chief at the Digital Transformation Agency and co-founder of Hypereal, argues it’s not.

“People who spend their entire lives studying ethics can’t agree on what’s necessary, what’s ethical. So why would we assume people who are working in AI, who don’t have a strong background in ethics or moral philosophy, can come up with a stamp of what’s ethical AI or not,” he said.

“I’d rather understand how the AI made the decision it made, rather than just be told that it’s ethical by some government body because that wouldn’t really impress me.”

Mr Shetler went on to say that what’s considered ethical by one person may not be considered ethical by another, and that those opinions will vary according to time and place.

“The problem is we don’t have a fundamental agreement across the board because there are lots of disagreements about what’s ethical and unethical, and a lot of those cases are political, and because they’re political they haven’t been resolved, so it doesn’t really get us anywhere,” he said.

But talk of establishing an ethical AI standard is not completely new, according to Richard Kimber, chief executive and co-founder of Daisee, who says it has been front of mind for a number of global organisations.

It’s just that awareness in Australia is a little lower.

“From a global point of view it has definitely been a big topic,” he said.

“Certainly, all the internet companies and a number of organisations have collectively got together to work on a set of principles about the ethics of AI and how to make sure the research around AI is being used positively for society.”

Although that’s not to say that Australia should “slavishly follow the global principles”, Mr Kimber said.

“I think we have to be careful not to turn out a sequence of complex legislation. It should be developed as an industry-led cooperative where we get different stakeholders to contribute – academics, business people and government. There’s quite a range of implications for people, not just one group.”

The conversation comes just as the federal government announced it will allocate $30 million to CSIRO’s Data61 to support the development of the nation’s AI and machine learning capability, including a technology roadmap, a standards framework and a national AI ethics framework.

“First, it’s great to see AI getting a mention, but it’s a toe-in-the-water amount. I think we really need to focus on a much larger investment,” Mr Kimber said.

“This is a very fundamental change in the way computing works, and there are people talking about AI powering the next industrial revolution.”

“Compare that to how China and our other Asian neighbours are approaching AI: they’re putting hundreds of millions, even billions, into AI, not $30 million. There’s probably a misunderstanding of how important it is.”

“It’s probably the biggest issue of how a generation – in terms of education and jobs – needs to change, and how competition at a global level will change.

“This is a global issue and given we are a little behind we have to play catch-up, so we should over invest, not under invest.”

Do you know more? Contact James Riley via email.
