Existing laws must be better enforced and regulatory gaps filled in order to ensure that new technologies like artificial intelligence don’t infringe on human rights, according to Human Rights Commissioner Edward Santow.
The Australian Human Rights Commission launched a project investigating the intersection of human rights and technology 18 months ago, and has now unveiled a discussion paper which sets out a “template for change” for how Australia develops and uses emerging technologies.
The paper sets out a series of significant policy proposals, including a national strategy governing new technology, an AI Safety Commissioner, a number of new pieces of legislation and a moratorium on the use of potentially harmful facial recognition technology.
Addressing these issues now is “deeply important”, Mr Santow said.
“The possible uses for AI are literally almost limitless, and we’re seeing it being used in everything from smart washing machines to complex decision-making systems that apply to policing and the financial services sector,” Mr Santow told InnovationAus.
“What is especially important to us is that there are some forms of decision-making that use AI where the risk of harm is particularly great. Unless we have a really robust system of safeguards in place then we are concerned that people will indeed be harmed, possibly irreparably.”
The key takeaway from the discussion paper is that the foundational principles of democracy – accountability and the rule of law – need to be applied to AI and other new technologies and this needs to be done through enforcing existing laws and creating new ones, not just through ethical frameworks.
It sets out three guiding principles for AI-informed decision-making: AI use should comply with human rights law, minimise harm, and be accountable.
The most important action the government could take to ensure human rights are better protected would be to better enforce existing laws and regulations surrounding artificial intelligence, Mr Santow said.
“We thought there would be a very strong case for a huge amount of new legislation but instead what we found is that we already have a variety of really strong laws in place designed to protect our human rights,” he said.
“The primary problem is that those laws are not always being vigorously and effectively enforced when it comes to development and the use of AI.”
To do this, the Commission has recommended the launch of a National Strategy on New and Emerging Technologies to ensure the opportunities of these technologies are realised and the “very real threats” are avoided.
This would guide a multifaceted regulatory approach spanning law, co-regulation and self-regulation. It would require more effective application of existing laws, reform of some laws, and better education and training on these issues.
“Right at the centre of that strategy should be promoting effective regulation and making sure that laws are properly enforced,” Mr Santow said.
An AI Safety Commissioner should also be established as an independent statutory body to play a primary role in developing, coordinating and building capacity among current regulators, monitoring the use of AI and determining issues of immediate concern.
“Most usefully they would build the capacity of the existing regulatory ecosystem. It’s within our current areas of responsibility to enforce those laws but also to help citizens, consumers and industry to understand how those laws apply, and to make it easier for those key players to make the most of the positive opportunities without causing harm,” Mr Santow said.
The HRC also called for a number of “targeted reforms” and new laws, including legislation requiring that individuals be told if they are the subject of an AI-informed decision-making process, the introduction of a statutory cause of action for serious invasion of privacy, and laws requiring that individuals be provided with technical and non-technical explanations of an AI-made decision.
The HRC project is one of several in Australia currently looking at AI ethics, with the Coalition recently unveiling its own AI ethical framework for the private sector.
The “proliferation of overlapping ethical frameworks” is problematic and can “frustrate attempts to achieve industry-led compliance”, the discussion paper said. An independent body should be commissioned to assess the effectiveness of these frameworks and consolidate them.
Ethical frameworks serve a useful purpose, Mr Santow said, but should be treated as the third tier below laws and regulatory bodies.
“What we’ve often seen with new technologies in the last decade or so is it’s almost as if the conventional pyramid has been turned on its head. They start with the idea of an ethical framework and may never get to the idea of whether the law applies,” he said.
“When you’re talking about the sorts of risks we are most focused on, like racial discrimination, to describe those as ethical issues is really dangerous. With ethical questions you have a choice, but racial discrimination is not a choice at all, everyone must comply.
“The focus should really be on what the things are we absolutely have to comply with, and then where the law is appropriately silent that’s where ethical frameworks can be incredibly useful.”
The government should also engage the Australian Law Reform Commission to conduct an inquiry into the accountability of AI-informed decision-making, the AHRC said, which would identify where government uses AI in decision-making and conduct a comprehensive cost-benefit analysis of that use.
The Commission has also called for a legal moratorium on the use of facial recognition technology in decision-making that has a legal effect for individuals until an appropriate legal framework has been put in place.
This follows the powerful national security committee’s outright rejection of the government’s facial recognition plans earlier this year.
The Commission’s inquiry also placed a significant focus on ensuring new technologies are accessible for all Australians.
“What we’ve seen with new technologies again and again is that they can simultaneously make our world more inclusive and also have the opposite effect. What we’re really focused on with that part is things like human rights by design,” Mr Santow said.
“There’s some really cool work being done that is actually both more inclusive in terms of the products and services that are created, but also really smart business-wise. We want to put a bit of wind in the sails of some of that excellent research because we think it can benefit everyone.”