Ethical AI will be a competitive advantage: Santow


Tony Kirkby
Contributor

Ed Santow has led the Australian Human Rights Commission (AHRC), the independent statutory body established to protect and promote all aspects of human rights, since August 2016.

Mr Santow is one of seven AHRC commissioners, each focused on a different aspect of human rights. His current focus – and what he believes will be the biggest project of his term as commissioner – is an examination of the nexus between human rights and technology.

He talks candidly about his views on this complex and rapidly evolving field in this episode of the Commercial Disco podcast with InnovationAus director James Riley, and outlines progress on the AHRC’s Human Rights and Technology project.

Mr Santow says new technologies like artificial intelligence are often seen as a threat to human rights but can in fact advance human rights by making communities more inclusive. In his role, however, he recognises the threats for what they are, and says it is his “melancholy duty as human rights commissioner” to focus on these very real risks.

“We’ve understood fairly well that a right to privacy is something that is engaged by artificial intelligence in particular,” he says.

“What we’ve really sought to explore in the project to date is how other rights like equality, non-discrimination, the right to a fair trial, even, can be threatened if we don’t make really mindful decisions now.”

The project aims to examine not only how these rights can be protected but how this can be done without inhibiting innovation. “We want to make sure we are able to innovate, to solve the problems and take advantage of the opportunities we see around us, really smartly,” Santow says.

“That’s the piece to the puzzle where I think we can do better: innovate in a way that is consistent with our values, that will be better, smarter, and give our citizens what they want.”

Edward Santow: A renewed focus on the complexities of human rights and technology

He sees a potential competitive advantage for Australia in being able to develop AI innovations consistent with human rights values and avoiding unintended consequences.

“It will be to our competitive advantage if we can show to consumers overseas that a piece of AI or new technology developed here has human rights protections baked in.”

When considering the potential of AI to undermine human rights, Santow draws parallels with cane toads, introduced into Australia to combat beetles that were destroying sugar cane crops. The toads proved to be a highly effective solution, but one with devastating unintended consequences.

For any area of innovation, regulation is almost invariably retrospective: it seeks to address the undesirable consequences of that innovation. The issue of human rights and AI is likely to be no different, but Santow suggests there is a solution.

He says we need to articulate very clearly what our overarching values are, establish ‘red lines’ that must not be crossed, and make sure anyone seeking to engage in innovation, including the government, respects those boundaries.

“I think that could fuel a very positive form of creativity, because some clear legal boundaries around what you can and can’t do can actually spur effective innovation.”

When he embarked on the human rights and technology project, Santow anticipated it would end up recommending much new regulation to protect human rights from AI. Now, he says, his view has changed.

“We’ve got a regulatory infrastructure, but all too often it is being ignored. So it’s not about building a new one, it’s about making sure the existing one that has served us well over generations is properly applied and enforced.”

The commission put out a discussion paper in late 2019 setting out its views on this idea and has followed that up with consultations. Santow says these identified some gaps in legislation that need to be filled, but revealed a much bigger task: better applying existing laws through a process that brings in the community, industry, academics and innovators.

Community awareness of AI and its potential human rights impacts has been on the rise, he says.

“Populations around the world are waking up to what’s at stake. Today people understand you could have your personal information used against you, that it could lead to you being discriminated against on the basis of something that you can’t control, like your race, or your age, or your sexual orientation, or your sex or gender.”

Do you know more? Contact James Riley via Email.
