OAIC investigates ‘dangerous’ face recognition app


Denham Sadler
Senior Reporter

The Australian Privacy Commissioner has launched an inquiry into a “really dangerous” facial recognition app that its founder has said is already being used by police in Australia.

Clearview AI is a secretive startup founded by Australian Hoan Ton-That which has exploded into controversy and public debate following a revealing feature in The New York Times.

The app sells itself to law enforcement, allowing police to use a photo of a suspect to find other images and information about them from across the internet. The company claims to have a database of three billion images that it has harvested from the likes of Facebook, LinkedIn and Instagram.

Angelene Falk: The OAIC is investigating controversial face recognition software

The news report said that the app is being used by more than 600 police departments in the US, and Mr Ton-That has recently claimed that it is also being used in Australia, although no police departments have admitted to using it.

Mr Ton-That grew up in Australia and moved to the US when he was 19 years old. In an interview with the ABC, he said the technology is currently being used in Australia.

“We have a few customers in Australia who are piloting the tool, especially around child exploitation cases,” Mr Ton-That said.

“There’s a lot of crimes and cases that are being solved. We really believe that this technology can make the world a lot safer.”

The Office of the Australian Information Commissioner this week confirmed it has opened an investigation into whether the app is being used in Australia, or if the personal information of Australians has been caught up in its database.

“The OAIC has commenced making inquiries with Clearview AI about whether Australians’ personal information is being used in its database for the purposes of facial recognition and if it is being used in Australia,” an OAIC spokesperson told InnovationAus.

“Once those inquiries are completed, the OAIC will determine whether further action is required.”

It is currently unclear which, if any, Australian police departments are using Clearview AI. ABC News contacted police in every state and territory: all denied using the app except Victoria Police, which declined to comment on the “specifics of the technology”, while Queensland and Western Australia police did not respond at all.

The company brands its product as “technology to help solve the hardest crimes”, and claims it is an “after-the-fact research tool” rather than a surveillance system.

The app has led to serious concerns over privacy and the invasive and unreliable nature of facial recognition technology.

Australian Privacy Foundation surveillance committee co-chair Dr Monique Mann said revelations Clearview is being used in Australia are hugely troubling.

“Facial recognition is really dangerous because it collects biometric information, which is sensitive personal information. It acts as a conduit between your physical presence and connects to other information that’s out there about you,” Dr Mann told InnovationAus.

“It enables tracking through public places and identification in public places, and we already have a widespread surveillance system that supports this in CCTV. There’s a real risk to individual privacy, and that is a right that allows for a number of other rights. The way this is now happening with the Clearview app is really frightening.”

Mr Ton-That said the company’s technology only harvests photos that are publicly available on the internet.

“The general public does understand that things that are public do get into search engines and other places. We have a strong belief that law enforcement do the right thing with the tool and we have seen zero abuse so far,” he said.

But Dr Mann said these photos are being misused without consent.

“Three billion photos have been scraped without any individual knowledge, consent or awareness that this is happening. It’s sensitive personal information being used for purposes that it wasn’t meant for,” she said.

Australia’s laws and regulations aren’t adequate to prevent privacy invasions like this, and need to be updated, Dr Mann said.

“We certainly need a more robust regulatory framework that’s founded in a human rights approach. The key question is to regulate or to ban. That’s not resolved, we need to have a conversation about the type of society we want to live in, and the role of technology within that,” she said.

“It’s not necessarily a foretold conclusion that we must have facial recognition technology – we can decide.”

Several digital rights groups in Australia are campaigning for a bill of rights, which would ideally include safeguards against the use of services like Clearview by police and other agencies.

“Increasingly people are becoming more aware of the issues that concern personal information and privacy. There’s a growing awareness in the community of these areas. It’s obviously negative that this is going on, but it’s a positive that it’s showing there are real weaknesses in our current frameworks that need to be addressed,” Dr Mann said.

The use of facial recognition technology by government agencies and law enforcement has been in the spotlight recently. The Coalition’s plan for a nation-wide biometrics and facial recognition capability was rebuffed by the powerful national security committee, with the legislation sent back to the drawing board.

A recent Human Rights Commission report called for a legal moratorium on the use of facial recognition technology in decision-making that has a legal impact for individuals until an appropriate legal framework has been implemented.

Do you know more? Contact James Riley via Email or Signal.
