Regulators urged to get proactive on generative AI


Stuart Mason
Contributor

Governments around the world can use existing laws and regulators to proactively address many of the privacy and data risks associated with new technologies such as generative artificial intelligence, according to UK Information Commissioner’s Office executive director for regulatory risk, Stephen Almond. 

The Information Commissioner’s Office (ICO) is the UK’s independent regulator for data protection and freedom of information, responsible for upholding information rights in the public interest.  

Mr Almond visited Australia as part of the International Association of Privacy Professionals’ ANZ Summit 2023 in Sydney last month, and described how the UK is leading the way on privacy and data protection issues. 

Much of his work now revolves around AI and specifically generative AI, which has skyrocketed in popularity in the last year and left governments and regulators scrambling to ensure the associated data and privacy risks are mitigated. 

He said the UK government has taken a different approach from many other nations, opting to use its existing regulators and laws to address risks from emerging technologies rather than introduce new laws and technology-specific regulators. 

“Other jurisdictions are contemplating an AI-specific regulator, but the UK is building on the strength of the existing regulators,” Mr Almond said. 

“There’s a lot that can be done within the scope of existing regulations. AI is a general purpose technology — the risks are more context-specific. For example, AI in medicine needs different scrutiny compared to AI in a cinema. 

“The risks are context-specific and questions vary depending on whether it’s an organisation looking to develop models or an organisation deploying models.” 

Legislation and new regulations will be needed to plug gaps in laws regarding specific uses of AI, he said. 

“If we’re honest with ourselves, there will be some risks we’ve not identified and some areas that will need regulation,” Mr Almond said. 

“But that’s not one giant regime to reign over us. It’s difficult in terms of making sure that there is the right relationship between that sort of regulation and how AI is deployed in different sectors — there will be gaps to close off.” 

Regulators need to be proactive when it comes to new technologies such as generative AI, Mr Almond said. 

“We’ve been working super hard over the last few years to make sure that we aren’t one of those regulators that sits back and says, ‘you’ve got it wrong’,” he said. 

“We want to provide proactive guidance to the market about how data protection laws apply to the use of AI in general, and practical tools for professionals.” 

These services include an AI advice platform, through which companies looking to implement AI can receive a response on how data protection laws apply within 15 working days, and a regulatory sandbox. 

While regulators need to focus on proactively working with businesses, they will also need the power to step in when a business gets it wrong with technologies such as AI. 

In October, the ICO issued Snap, the owner of social media platform Snapchat, with a preliminary enforcement notice in relation to its My AI chatbot, which was based on the ChatGPT tool. 

The ICO said Snap may have failed to adequately identify and assess the privacy risks the tool posed to its millions of users in the UK. 

“We’re making sure we are taking action where there are significant concerns about how organisations have mitigated the privacy risks, but also doing everything we can to provide and support organisations trying to adopt this technology,” Mr Almond said. 

For Australian businesses looking to implement a new AI tool or solution, Mr Almond said the first step should always be a privacy impact assessment. 

“The first step is planning and thinking through the privacy risks — that’s an invaluable step,” he said. 

“That sounds like a bureaucratic tool but it’s about helping people identify different sorts of risks that need to be closed off at different points of the AI life cycle. 

“It’s so much easier if you get it right the first time rather than the cost of retraining the model or reclassifying where the data comes from. It’s not worth it in terms of getting it wrong.” 

Regulators from around the world will also need to work together to address the risks of tech such as generative AI, Mr Almond said. Australia and the UK’s privacy offices enjoy “real, practical operational cooperation”, he said. 

“There’s a deep bond there that goes back some time,” he said. 

“That’s evident in a degree of regulatory cooperation which isn’t just nice chats and warm words; you can see sharp-end regulatory actions that the UK and Australian Information Commissioners have taken forward in relation to Clearview AI, for example. There’s real, practical operational cooperation on enforcement, as well as great work on policy.” 

This article was produced by InnovationAus.com in partnership with British Consulate General Melbourne. 
