Misinformation remains a vexed policy problem


Jordan Guiao
Contributor

A review of the Australian Code of Practice on Disinformation and Misinformation – Australia’s answer to misinformation online – is currently underway.

A year since being developed by a tech lobby group, the voluntary, penalty-free Code remains paltry and ineffective against one of the biggest, most wicked problems on the internet today.

Harmful misinformation continues to be rife, including during the recent federal election, throughout the enduring pandemic with new waves and sub-variants, and on global geo-politics.

The new Albanese Labor government has announced some clear priorities since being elected – on climate action, cost of living issues, and government integrity. But it’s unclear whether technology policy will become one of these priorities.

This new government is yet to make its mark on technology policy. And regardless of whether you agreed with the regulations set out by the previous Morrison government, technology reform was visibly on its agenda.


There are huge areas of reform needed, and initiatives started by the previous government like the Disinformation Code continue to play out.

New Communications Minister Michelle Rowland has not committed to the Code, and has hinted at actions that are more focused on data transparency. Certainly, the current Code does little by way of tackling misinformation and the central role digital platforms play in facilitating, distributing and amplifying it.

It remains one of the more lacklustre programs born out of the Digital Platforms Inquiry, compared to examples like the globally watched News Media Bargaining Code, and wide-reaching proposals as part of the significant (albeit delayed) Privacy Act Review.

The initial traction of the Digital Platforms Inquiry, which informed the Code, has seemingly lost steam, and there is a huge opportunity for Labor to once again generate momentum, not least in the area of misinformation.

There are fundamental issues with the Code’s effectiveness and governance. It continues to be opt-in, relying on the goodwill of technology platforms to self-regulate.

The actions taken by technology platforms need only be presented through self-congratulatory ‘transparency reports’ once a year, in an environment where misinformation occurs daily. Meanwhile, the ‘independent oversight’ provided by the Code comes from an opaque and questionable sub-committee selected by the digital lobby group that organised the Code, with no transparency on its selection criteria or recruitment process, and no public involvement or consultation.

The debate over mis- and disinformation has become a contentious policy play, with politicians like Peter Dutton using it as a cover to silence critics, and former MPs like Craig Kelly and George Christensen accusing fact-checking initiatives of censorship.

There are actions overseas that provide some welcome benchmarks. These include the European Code of Practice on Disinformation, which creates better engagement with the public by empowering people with tools to understand and report disinformation, in stark contrast to Australia’s bureaucratic complaints mechanism.

The European Code also has a transparency centre that’s available to the public versus Australia’s questionable ‘independent’ sub-committee, and a stronger monitoring framework.

There is also the broader Digital Services Act, which requires transparency from online platforms, including on the algorithms they use; more effective safeguards for users, including the ability to challenge platforms’ content moderation decisions; and access to key data for researchers.

At a minimum, the toothless Australian Code could take a leaf out of the European model with stronger safeguards and requirements from platforms.

The Australian Code needs to be mandatory rather than voluntary, demand more frequent reporting than just once a year, and have truly independent oversight – not one hired by the same lobby group that represents the technology platforms.

Failing these, it may be more effective for the government to scrap this weak initiative and simply start over.

We need stronger regulatory action and must do away with self-serving policy to properly combat the harmful misinformation that continues to plague Australians online.

Jordan Guiao is a Research Fellow at The Australia Institute’s Centre for Responsible Technology.
