Facial recognition company Clearview AI breached Australian privacy rules through its “indiscriminate and automated” collection of the sensitive biometric information of Australians on a “large scale, for profit”, the privacy watchdog has found nearly two years after it started inquiries.
The Office of the Australian Information Commissioner (OAIC) released its determination on Clearview AI on Wednesday, ordering the controversial facial recognition company to stop collecting any information on Australians and to delete all of the images it has already hoovered up.
The privacy watchdog also confirmed that a number of Australian police forces had utilised Clearview’s app and fed the system images of themselves, suspects and victims to test it, a practice that is currently the subject of a separate investigation the OAIC is still finalising.
But the OAIC is unable to directly issue a fine to Clearview under its existing powers, and has not opted to apply to the courts for a fine. Under legislation unveiled by the government last week, the OAIC would gain access to more significant civil penalties, also sought through the courts.
Despite this plan being first announced nearly three years ago, these powers are not yet in place and were unavailable to the OAIC as part of this determination.
The OAIC is also pushing for the power to be able to issue civil penalty orders for privacy breaches that don’t reach the “serious” or “repeated” threshold.
Clearview offers a facial recognition app which allows users to upload a photo of an individual and have it matched with images in the company’s database of at least 3 billion photos. These photos are automatically scraped from social media platforms and a range of other online sources.
The company mostly sells itself to law enforcement agencies, but its patent also includes plans for it to be used by individuals.
It was revealed that from November 2019 to January 2020, members of the Australian Federal Police (AFP) registered for a free trial with Clearview and conducted a “limited pilot of the system in order to ascertain its suitability”.
Police agencies in Victoria, Queensland and South Australia also used the facial recognition service as part of a free trial.
The OAIC started looking into Clearview in January 2020, and launched a formal investigation in March. In July last year it announced that this investigation would be conducted in partnership with its UK counterpart.
In the determination, the OAIC found that Clearview had breached Australian privacy laws by collecting sensitive information without consent and by unfair means, and by not taking reasonable steps to notify the individuals, ensure the information was accurate or to implement practices, procedures and systems to ensure compliance with the Australian Privacy Principles.
The company has been ordered to stop collecting facial images and biometric templates from individuals in Australia and to destroy the existing ones in its database within 90 days.
In the ruling, the privacy watchdog rejected many of Clearview’s key arguments against the case, including that it is not subject to Australian laws, that it is a small business, and that it does not collect personal information.
“The covert collection of this kind of sensitive information is unreasonably intrusive and unfair. It carries significant risk of harm to individuals, including vulnerable groups such as children and victims of crime, whose images can be searched on Clearview AI’s database,” Australian Information Commissioner Angelene Falk said.
“By its nature, this biometric identity information cannot be reissued or cancelled and may also be replicated and used for identity theft. Individuals featured in the database may also be at risk of misidentification. These practices fall well short of Australians’ expectations for the protection of their personal information.”
The collection of the sensitive information of countless Australians was not necessary, legitimate or proportionate, Ms Falk said.
“When Australians use social media or professional networking sites, they don’t expect their facial images to be collected without their consent by a commercial entity to create biometric templates for completely unrelated identification purposes,” she said.
“The indiscriminate scraping of people’s facial images, only a fraction of whom would ever be connected with law enforcement investigations, may adversely impact the personal freedoms of all Australians who perceive themselves to be under surveillance.
“Clearview AI’s activities in Australia involve the automated and repeated collection of sensitive biometric information from Australians on a large scale, for profit. These transactions are fundamental to their commercial enterprise.”
Following the launch of the OAIC’s inquiry, Clearview said it stopped marketing in the country and has a policy of refusing all requests for accounts from Australia. But the privacy office found the company had taken no steps to stop collecting Australians’ information for its database, or to stop providing matches of Australians to its users.
“During my investigations the respondent provided no evidence that it is taking steps to cease its large scale collection of Australians’ sensitive biometric information, or its disclosure of Australians’ matched images to its registered users for profit. These ongoing breaches of the Australian Privacy Principles carry substantial risk of harm to individuals,” the determination said.
In response to the inquiry, Clearview claimed that it was not subject to the Australian Privacy Act, arguing that its annual turnover was less than $3 million, which would place it outside the reach of the privacy laws.
But the OAIC said the company provided “no evidence” to support the turnover claim, and that it is subject to the Privacy Act due to its collection of the personal information of Australians and its work with Australian authorities.
The OAIC also found that the images of individuals collected by Clearview do constitute personal information, and that simply uploading them to Facebook, for example, does not amount to consent for them to be used in the way Clearview did.
“I consider that the act of uploading an image to a social media site does not unambiguously indicate agreement to collection of that image by an unknown third party for commercial purposes,” the determination said.
“In fact, this expectation is actively discouraged by many social media companies’ public-facing policies, which generally prohibit third parties from scraping their users’ data.”
The OAIC found that Clearview offered its services as part of free trials to numerous police services around Australia. These agencies tested the functionality of the facial recognition service, including by uploading the photos of victims, such as children.
The OAIC is currently finalising an investigation into the AFP’s use of Clearview and whether this complied with its requirements under the Australian Government Agencies Privacy Code.
The determination comes nearly two years after the OAIC launched inquiries into Clearview, and after several other privacy regulators have come to their own determinations. The Canadian privacy commissioner’s own inquiry last year led to Clearview leaving the country.
Privacy campaigners have also filed legal complaints against Clearview across Europe, arguing the company breached EU privacy law with its “dishonest” and “extremely intrusive” app.
In July this year the OAIC also found that Uber had breached the privacy of 1.2 million Australians by failing to protect them from a cyber-attack in 2016. This came after a three-and-a-half-year investigation, with the delay attributed to “jurisdictional issues”.
Last month the OAIC ruled that 7-Eleven had violated customers’ privacy by secretly collecting their facial images in 700 stores, following a seven-month investigation.
The OAIC is currently not able to issue fines to companies for breaching the privacy of Australians, but can apply to the courts for an order if there is a “serious or repeated interference with privacy”.
As part of the ongoing review into the Privacy Act, the OAIC is pushing for greater enforcement powers which would allow it to seek fines without the need for the “serious” or “repeated” threshold.
Do you know more? Contact James Riley via Email.