AFP breached privacy rules in use of facial recognition app: privacy watchdog


Denham Sadler
National Affairs Editor

The Australian Federal Police breached privacy rules in its use of a controversial facial recognition tool by not properly assessing the risks of using the technology, the national privacy watchdog has found.

The Office of the Australian Information Commissioner (OAIC) released its determination on the AFP’s use of Clearview AI, finding that it failed to comply with its privacy obligations by not completing a privacy impact assessment and did not have systems in place to track its use of the technology.

Clearview AI, based in the US, offers a facial recognition app that allows users to upload a photo of an individual and have it matched with images in the company’s database of at least 3 billion images scraped from across the internet. If a match is found, the tool provides a link to where the matching images were found online.


Last month the OAIC found that Clearview AI had breached Australian privacy rules through the “indiscriminate and automated” collection of sensitive biometric information of Australians on a “large scale, for profit”.

From 2 November 2019 to 22 January 2020, members of the AFP’s Australian Centre to Counter Child Exploitation (ACCCE) used a free trial of Clearview AI’s app to upload facial images of persons of interest and victims in active cases.

The AFP did not complete a privacy impact assessment before using the tool, despite being required to do so for all high privacy risk projects, the OAIC found.

This was in breach of the Australian Government Agencies Privacy Code.

“The AFP did not assess the risks of providing personal information to a third party located overseas, or assess its security practices, accuracy or safeguards,” the OAIC said in the determination.

The privacy watchdog also found that the AFP failed to take “reasonable steps to implement practices, procedures and systems in relation to its use of Clearview AI”.

The OAIC also found gaps in the AFP’s systems for identifying, tracking and accurately recording the trial of the facial recognition tool, in its internal systems for identifying the collection and use of personal information, and in its mandatory privacy training practices.

“I recognise that facial recognition and other high privacy impact technologies may provide public benefit where they are accompanied by appropriate safeguards,” Australian Information Commissioner Angelene Falk said.

“But there were a number of red flags about this third party offering that should have prompted a careful privacy assessment. By uploading information about persons of interest and victims, the ACCCE were handling personal information in a way that would have serious consequences for individuals whose information was collected.”

The OAIC has directed the AFP to engage an independent assessor to review and report back on residual deficiencies in its practices, procedures, systems and training in relation to privacy assessments and make any changes necessary. It has also been ordered to ensure relevant AFP personnel have completed an updated privacy training program.

While the OAIC’s investigation focused on the AFP, police agencies in Victoria, Queensland and South Australia also used free trials of the Clearview technology.
