Data61 puts AI ethics into practice

After developing an artificial intelligence ethics framework long before the likes of ChatGPT and Bard, efforts are afoot at the data arm of Australia’s national science agency to help put the “difficult” principles into practice.

Speaking at the Leading Innovation Summit in Sydney this week, Data61 director Jon Whittle said the “massive upsurge in AI technologies” following the arrival of ChatGPT in November 2022 had brought ethics to the forefront of AI development.

He said that while the generative AI technologies that had resulted from the “arms race” between OpenAI, Microsoft, Google and others held the potential to deliver benefits, the technology equally has “downsides that we have to be careful of”.

“On the one hand, ChatGPT is a great technology. You can write a poem to your friend, you can get it to recommend recipes for you, or you might even be able to get it to write your press releases or write your emails,” he said.

“On the other hand, there are lots of dangers that technologies like AI bring with them. They work by analysing all the data that’s out there on the internet, but a lot of the data out there on the internet is biased, it’s discriminatory – and those things can be reflected in these AI technologies.”

Data61 director Jon Whittle presenting at the Leading Innovation Conference. Image: LinkedIn

Mr Whittle, who was discussing AI in the context of the seven global megatrends that CSIRO believes are the greatest existential threats for Australia, gave the example of “malicious code”, which ChatGPT can generate with relative ease.

“It used to be that you needed quite a bit of expertise to generate malicious code. You can now do it with ChatGPT and you can send that out to the world faster than before,” he told attendees from the likes of the Australian Taxation Office and Origin Energy.

Despite Data61 releasing an AI ethics framework back in 2019 to guide the development, adoption and use of AI systems, Mr Whittle said it was difficult to apply the “eight very high-level principles”, including those that go to issues of harm, fairness, privacy and transparency.

“The real challenge we have, though, is that these are high-level principles. They’re actually quite difficult to operationalise into practice, so we are doing a lot of work at Data61 to give technologists and software engineers greater guidance, processes and tools to actually implement these,” he said.

The New South Wales government, which also introduced an AI ethics framework well before ChatGPT, is similarly revisiting its work by embedding complementary information to help public servants navigate the risks posed by the technology.

Mr Whittle also discussed the implications of AI for creativity. While it was long felt that AI could not generate new ideas on its own, he said, that assumption has been challenged by the arrival of generative AI technologies like ChatGPT.

“Until recently, it was assumed that AI was going to be good at automating boring tasks, but that it wouldn’t be very good at creativity. Creativity was felt to be the last bastion of humanity in a way.”

“But I think that hypothesis has been blown out of the water with these new tools. Look at DALL-E, for example, where you can write text and it will generate wonderful images, and you can argue about whether that’s creative or not. It gets philosophical.”

Earlier this week, tech leaders, including Elon Musk and Steve Wozniak, called for a moratorium of at least six months on training AI systems that are “more powerful than GPT-4”, dividing the tech world.

Do you know more? Contact James Riley via Email.
