HRC calls for an AI Safety Commissioner


Denham Sadler
Senior Reporter

The federal government should establish an AI Safety Commissioner and halt the use of facial recognition and algorithms in important decision-making until adequate protections are in place, the Australian Human Rights Commission has concluded after a three-year investigation.

The Australian Human Rights Commission’s (AHRC) report on Human Rights and Technology was tabled in Parliament on Thursday afternoon, with 38 recommendations to the government on ensuring human rights are upheld in the laws, policies, funding and education on artificial intelligence.

Human Rights Commissioner Ed Santow has urged local, state, territory and federal governments to put on hold the use of facial recognition and AI in decision-making that has a significant impact on individuals.


This moratorium should remain in place until adequate legislation regulates the use of these technologies and ensures human rights are protected.

The use of automation and algorithms in government decision-making should also be paused until a range of protections and transparency measures are in place, Mr Santow said in the report.

“New technology should give us what we want and need, not what we fear,” Mr Santow said.

“Our country has always embraced innovation, but over the course of our Human Rights and Technology project, Australians have consistently told us that new technology needs to be fair and accountable.

“That’s why we are recommending a moratorium on some high-risk uses of facial recognition technology, and on the use of ‘black box’ or opaque AI in decision-making by corporations and by government.”

This moratorium would not cover all uses of these technologies, the report said, but only their use in making decisions "that affect legal or similarly significant rights".

It should apply until a range of new legislation is introduced, including legal obligations that individuals be notified when AI is materially used in making an administrative decision, and that a decision cannot be made using AI if the reasons or a technical explanation for it cannot be produced.

The government should also require that human rights impact assessments are undertaken before any department or agency uses an AI-informed decision-making system to make administrative decisions, the AHRC concluded.

These assessments would examine whether the use of the technology complies with international human rights obligations, whether it automates any discretionary elements of decisions, the level of human review involved, and whether it is authorised and governed by legislation.

The AHRC urged the federal government to appoint an AI Safety Commissioner as a new independent statutory office to “champion the public interest”.

“Australians should be told when AI is used in decisions that affect them. The best way to rebuild public trust in the use of AI by government and corporations is by ensuring transparency, accountability, and independent oversight, and a new AI Safety Commissioner could play a valuable role in this process,” Mr Santow said.

The position, which could be incorporated in an existing body or a new standalone entity, would provide expert guidance to government agencies and the private sector on complying with the law and ethical standards.

The Commissioner would also work collaboratively to build the capacity of regulators, and monitor trends on the use of AI in Australia.

The office must be independent of government in its structure, operations and legislative mandate, and have a particular focus on the impact of new technologies on vulnerable and marginalised people.

The report also recommended an audit be conducted on all current and proposed AI-informed decision-making by or on behalf of a government agency, which could be completed by the new AI Safety Commissioner.

A multi-disciplinary taskforce on AI-informed decision-making should also be established and chaired by the Commissioner, Mr Santow said.
