The recent determination by the privacy watchdog on facial recognition should be a warning for Australian governments and companies using the technology, with legislated protections urgently needed, former Human Rights Commissioner Edward Santow says.
Last week the Office of the Australian Information Commissioner (OAIC) released its determination finding that Clearview AI had breached Australian privacy laws through its “indiscriminate and automated” collection of sensitive biometric information of Australians on a large scale, for profit.
Clearview offers a facial recognition app which allows users to upload a photo of an individual and have it matched with images from the company’s database of more than 3 billion images, which it has hoovered up from across the internet.
The OAIC found the company breached privacy laws in several ways, and ordered it to stop collecting the data of Australians and delete all of the data it already has. The privacy office also found that a number of Australian police forces had tried out the facial recognition technology.
The ruling should be a wake-up call for Australian businesses and governments currently using facial recognition, Mr Santow, now a professor in responsible technology at UTS, said.
“Lots of police forces, companies and government agencies are all either using or considering using these sorts of face-matching services. I think the determination should give those people pause,” Mr Santow told InnovationAus.
“There are huge legal and reputational and ethical risks if you use a service that is in breach of citizens’ privacy. It’s incumbent on those companies and police forces and agencies to do due diligence and make sure that they offer a service that protects privacy and human rights.”
A key issue in the Clearview case is that this facial recognition technology was being used by the police, Mr Santow said.
“That’s one of the highest stakes and highest risk areas where you could use facial recognition. For individuals we’re all left with this unsettling feeling – was I one of the people whose image was scraped from social media by a private company for the purpose of conducting a service found to be in breach of Australian privacy law?” he said.
“We simply don’t know the answer to that. We know the risks associated with decisions being wrong in that context are very high. They go to the very heart of our justice system. If someone is wrongly identified as a criminal suspect they can have all kinds of coercive action taken against them.
“The stakes couldn’t be higher and that’s why the Human Rights Commission is continuing to call for clear legislation that governs facial recognition and protects human rights.”
Mr Santow focused on technology and human rights during his time as Commissioner, and earlier this year called for a moratorium on the use of facial recognition technology in important decision making until adequate legislation was in place.
The Clearview decision underscores the need for this legislation, he said.
“Sometimes when there is regulatory action it shows that there’s no need for further legislation. I don’t think that’s the case here. The determination only looks at one element of privacy, it didn’t look at questions about surveillance or any other human rights breach,” Mr Santow said.
While found to have breached Australian privacy law, Clearview won’t be punished beyond the order to delete all of its data on Australians. This shows a need for local laws to strike the right balance, Mr Santow said.
“What the law should always do is get the balance right between setting clear red lines with strong enforcement consequences for anyone who is causing harm – that’s a really important piece of the puzzle that seems to be lacking,” he said.
“On the other hand it should incentivise companies that want to innovate responsibly, that want to use this sort of technology positively. Our laws should incentivise companies that want to do the right thing and ward off companies that are cavalier or inclined to cause harm.”
Last week also saw social media giant Facebook announce that it would be shutting down its own facial recognition system as part of a “company-wide move” to limit its use of the technology.
The company’s VP of AI Jerome Pesenti said the decision was the result of a “need to weigh the positive use cases for facial recognition against growing societal concerns, especially as regulators have yet to provide clear rules”.
Mr Santow said this is a positive move, but the use of facial recognition technology by companies less in the public spotlight is equally concerning.
“Companies like Facebook, Microsoft and Amazon to some extent have reduced their involvement in facial recognition. They have a real commercial incentive in protecting their reputation,” he said.
“The corollary to that is now what happens? Does that mean companies that don’t have as great a concern about their reputation essentially can have free rein in this area? That might be the perverse consequence.
“These companies can cause even more damage, particularly if they’re not selling their products and services to the public but can operate in a much less prominent way by selling products and services to other businesses and government agencies.”
Do you know more? Contact James Riley via email.