The Australian Privacy Commissioner has officially launched an investigation into controversial facial recognition tool Clearview AI.
Clearview AI, based in the US and founded by an Australian, offers a facial recognition app that allows users to upload a photo of an individual and have it matched against the company’s database of a reported 3 billion images, scraped from sites around the internet such as Facebook and LinkedIn. The service then provides links to where that photo appears online.
The company sells its app as a “research tool used by law enforcement agencies to identify perpetrators and victims of crime”.
The app is popular among law enforcement agencies around the world, and has been used by the Australian Federal Police. It was thrust into the public spotlight following an investigation by the New York Times at the start of this year, with widespread claims that it was illegal and unethical.
The Office of the Australian Information Commissioner has been conducting preliminary investigations into Clearview AI since late January, and has now determined that further action is needed.
The investigation, to be conducted with the OAIC’s UK counterpart, will focus on the controversial company’s use of “scraped” data from around the internet and the biometrics of individuals.
“The investigation highlights the importance of enforcement cooperation in protecting the personal information of Australian and UK citizens in a globalised data environment,” the OAIC said in a statement.
It comes after Clearview AI said it had ceased operations in Canada following a similar probe by that country’s privacy commissioner. More than 30 police clients in Canada had used the service.
Clearview AI has been used on multiple occasions by Australian police, despite the Australian Federal Police previously flat-out denying this.
But it was revealed earlier this year that between 2 November 2019 and 22 January 2020, members of the AFP-led Australian Centre to Counter Child Exploitation registered for a free trial of Clearview AI and conducted a “limited pilot of the system in order to ascertain its suitability”.
BuzzFeed News revealed recently that 2200 law enforcement agencies from around the world had used Clearview AI at least once.
Documents recently released under freedom of information laws show correspondence between Clearview AI representatives and Victoria Police officers.
The Clearview AI representatives claimed the service is like a “Google search for faces”, with police able to “instantly get results from mugshots, social media and other publicly available sources”.
“Our technology combines the most accurate facial identification software worldwide with the single biggest proprietary database of facial images to help you find the suspects you’re looking for,” an email from Clearview AI said.
Victoria Police said its officers used the service only to test the system, uploading open-source images to trial it.
A Victoria Police spokesperson said Clearview was not used for any investigations, and its use has since been discontinued.
“The technology was deemed unsuitable and there is no ongoing operational use of this platform,” the Victoria Police spokesperson said.
“Victoria Police uploaded a small number of publicly available stock images to Clearview AI to test the technology. No images linked to any investigation by Victoria Police were uploaded as part of this testing process.”
There has been widespread global backlash against the use of facial recognition technology by law enforcement in recent months, with a number of tech giants temporarily pausing sales to law enforcement.
Despite these concerns, the Australian government is still planning to create a national biometrics database allowing authorities and law enforcement to conduct facial recognition matches, with legislation expected to be re-introduced to Parliament soon.