AI will turbocharge disinformation while lax regulation is the norm


Jordan Guiao
Contributor

United Nations chief António Guterres recently called out mis- and disinformation as a “grave global harm” while launching a key report on the issue. The report, part of a series of UN policy priorities, describes mis- and disinformation as dangerous and deadly, with hate speech on digital platforms linked to violence and even genocide. The UN chief urged the world not to let the current buzz around artificial intelligence “distract” from these existing online dangers.

What Guterres didn’t mention is that, aside from potentially distracting from mis- and disinformation, AI will likely turbocharge them.

Deep fakes, voice cloning and personalised conspiracy bots are all tools now available in the mis- and disinformation arsenal. Disinformation will no longer be restricted to dank memes and lo-fi assets as AI enables sophisticated videos, high-quality doctored images and dynamic content.

Already it can be hard to discern AI-generated content, with fake images of the Pope in a puffer jacket, Trump being arrested and Biden giving a speech attacking trans people going viral.


AI will magnify the speed, scale and output of bot farms, propaganda networks and trolls, as these sophisticated tools allow misleading and inaccurate content to be produced with ease and shared widely.

And yet we continue to allow Big Tech and their representatives to advocate for a self-regulating landscape that has time and again proven inadequate. The Australian Code of Practice on Disinformation and Misinformation celebrated its achievements with the release of a third round of annual ‘transparency reports’ from its Big Tech members, yet it’s clear how selective and self-serving this framework is.

Those same Big Tech companies that have promised to tackle the issue have also walked back many of their policies designed to combat mis- and disinformation around election periods, ahead of the significant 2024 US federal election.

Many safety and ethics teams from Big Tech companies were also made redundant as part of the recent tech layoffs.

The chaos android Elon Musk actively pulled Twitter out of the EU Disinformation Code, demonstrating the flaw in relying on a voluntary Code of Practice. Since taking over Twitter, Musk has encouraged mis- and disinformation on the platform, even directly engaging with and amplifying neo-Nazi voices.

With the Australian Voice to Parliament referendum coming later this year, safety regulators and experts are rightly worried about the online harms, hate speech and disinformation that are sure to accompany the campaign. And while social media platforms have promised more stringent policies over this period, we are once again left relying on their good graces as to what they will ultimately choose to act on.

So it does not inspire much confidence that this weak regulatory regime, subject to policy about-faces, the deprioritisation of safety and ethics teams, and the whims of CEOs, will be robust enough to meet the coming AI tidal wave.

AI will turbocharge disinformation, enabled by the lax regulatory frameworks we have allowed to come to pass. And we will all look back in disbelief at how we thought a handful of ‘transparency’ reports were enough to fix the issue.

Jordan Guiao is Research Fellow at The Australia Institute’s Centre for Responsible Technology and author of ‘Disconnect: Why we get pushed to extremes online and how to stop it.’


