How governments can foster trust in the age of AI


Jennifer Mulveny
Contributor

The Australian government’s interim response to the Safe and Responsible AI in Australia consultation, published earlier this year, envisions a risk-based approach to regulating AI that would strike a constructive balance between encouraging adoption to spur economic growth and ensuring the technology is safe for everyone.

This is welcome news for Australia. As Minister for Industry and Science Ed Husic has said, the investment in and uplift from artificial intelligence (AI) simply will not be realised if the technology is not trusted.

As a first step, the Australian government has begun consulting with industry on the voluntary labelling of AI-generated content. This is a critical step towards addressing a significant societal issue – the dissemination of deepfakes, which can cause serious harm in mere seconds. These range from fake content created during election campaigns, to synthetically generated images of disasters, to child sexual abuse material (CSAM) created using AI.

Building trust in AI requires transparency  

While misinformation is not a new concept, generative AI has significantly reduced the time it takes to create convincing deepfakes. The danger here is that seeing is believing. While many of us are trained to be more sceptical of the written word especially when we don’t know the source material, we tend to trust what we see and hear as being “true”. An MIT study showed that we can process an image in just 13 milliseconds. That’s not much time for reflection, and in some cases, the damage has already been done. 

Even if a piece of online content is later revealed to be fake, it can still do serious harm. The power of social media means misinformation can be shared with millions of people in an instant. One consequence is the erosion of trust in digital content, as people have no way to identify what is trustworthy. That erosion of trust may in turn undermine the fundamental workings of government.


As it becomes increasingly difficult to discern what is true, labelling digital content with provenance details would at least provide consumers with transparency and important context so that they can decide whether to trust it or not.

One industry-leading approach is Content Credentials, an open-source technology that lets everyone see exactly where a piece of digital content came from, who created it, and what edits were made to it along the way, including whether it was created or edited with AI. The approach is backed by more than 2,500 members of the Content Authenticity Initiative and is built on the open technical standard developed by the Coalition for Content Provenance and Authenticity (C2PA).
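For readers curious about what this looks like in practice, the sketch below shows how a Content Credentials manifest, exported as JSON by a provenance tool, might be inspected to surface who produced an asset and whether AI generation is recorded. The manifest layout used here is an approximation of the C2PA schema for illustration only, not a definitive reference to any particular tool’s output.

```python
# Illustrative sketch only: reads a C2PA-style manifest store exported as JSON
# and prints a basic provenance summary. Field names approximate the C2PA
# schema and may differ from the output of any specific tool.
import json
import sys

# IPTC digital source type used by C2PA to flag AI-generated media.
AI_SOURCE_TYPE = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)


def summarise_manifest(path: str) -> None:
    """Print who produced the asset and whether AI generation is indicated."""
    with open(path, encoding="utf-8") as f:
        store = json.load(f)

    # A manifest store can hold several manifests; the "active" one
    # describes the current state of the asset.
    active_label = store.get("active_manifest", "")
    active = store.get("manifests", {}).get(active_label, {})

    print("Claim generator:", active.get("claim_generator", "unknown"))

    ai_generated = False
    for assertion in active.get("assertions", []):
        # The c2pa.actions assertion records how the asset was created or edited.
        if assertion.get("label") == "c2pa.actions":
            for action in assertion.get("data", {}).get("actions", []):
                print("Action:", action.get("action"))
                if action.get("digitalSourceType") == AI_SOURCE_TYPE:
                    ai_generated = True

    print("AI generation indicated:", ai_generated)


if __name__ == "__main__":
    summarise_manifest(sys.argv[1])
```

In practice, consumers would see this information through an in-product Content Credentials indicator rather than raw JSON; the sketch simply illustrates the kind of provenance data the standard carries.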

The private sector is leading the way 

Google recently announced it has joined the C2PA steering committee – alongside existing members including the BBC, Intel, Microsoft and Sony – to further develop the C2PA’s technical standard for digital content provenance. This was a watershed moment in driving mainstream awareness and adoption of Content Credentials. Google is also actively exploring how to incorporate Content Credentials into its own products and services.

As more organisations adopt Content Credentials into their tools and platforms, this solution will have a powerful impact on restoring trust in digital content.

For example, the BBC has introduced Content Credentials to confirm where an image or video has come from and how its authenticity has been verified. Meta is developing tools that build on C2PA’s open standards to enable labelling of AI-generated images.

Other organisations that have started to implement Content Credentials include Leica, Microsoft, Nikon, OpenAI, Qualcomm and Sony. At Adobe, we have integrated Content Credentials across Creative Cloud. We expect more platforms, tools and devices to join these efforts in 2024.

Government and industry must collaborate

Responsible AI innovation isn’t a one-time challenge to solve; it is an ongoing investment that will require close collaboration among governments, industry partners and experts.

In the United States, the Biden Administration convened Adobe and other leading technology companies to make a series of commitments to develop safe AI as the pace of innovation continues to accelerate. A subsequent Executive Order on AI builds upon these voluntary commitments and directs agencies to label the official content they create with information including whether AI was used to make it, to enhance transparency and affirm authenticity. 

These are important steps forward, especially with the upcoming elections happening around the world this year. In 2024, over four billion people – more than half of the world’s population across more than 40 countries – will go to the polls.

Adobe and other leading technology companies pledged to help prevent deceptive AI content from interfering with this year’s global elections as part of the AI Elections Accord at the Munich Security Conference in February.  

The role of government: adoption and education 

We also believe that Content Credentials can strengthen public confidence in the integrity of official government content and enhance trust with constituents.

Governments should consider legislation requiring campaigns to integrate Content Credentials and digital provenance into their online campaign communications. This would give voters transparency about what they are consuming and leave them better equipped to make an informed decision about whether to trust the material.

Alongside the adoption of Content Credentials, the Australian government needs to play a critical role in assessing, supporting and promoting a new type of media literacy around digital content.

First, we need to educate people that they can’t trust everything they see and hear digitally. Next, we need to show them that tools are available today to verify for themselves whether or not to trust a piece of content. Over time, people will come to expect that important news stories or factual events will carry provenance before they believe them – and when they see important content without it, they will know to be sceptical.

That is the state we must reach to restore trust in the digital ecosystem, and governments should treat it as an immediate step to protect election integrity.

Together, state and federal governments need to work with companies to build safe, secure and trustworthy AI. Global standards and frameworks like the C2PA’s need to be applied to both public and private organisations to ensure a consistent approach across sectors that supports transparency and citizen understanding.

The Australian government’s response to the Safe and Responsible AI in Australia consultation is a step in the right direction: towards a collaborative, risk-based and proportionate approach to AI governance that builds safe, responsible and trustworthy systems and safeguards the public against the threat of misinformation.

Jennifer Mulveny is Adobe’s Asia Pacific director for government relations and public policy

This article was produced by Adobe Asia Pacific in partnership with InnovationAus.com.
