Big Tech firms release misinformation reports as regulation looms


Denham Sadler
Senior Reporter

A number of Big Tech firms have released their latest reports under Australia’s voluntary misinformation code as they face the prospect of further regulation on the issue.

The Australian Code of Practice on Disinformation and Misinformation was developed by large global tech firms following the Australian competition watchdog’s digital platforms inquiry in 2019.

The voluntary code requires signatories to opt in to a number of commitments on misinformation and disinformation, such as publicly reporting on their efforts to combat it and having proper reporting mechanisms in place.

The first reports under the code were released in May last year, with the 2021 iterations unveiled on Monday.

It comes just weeks after the previous Coalition government committed to handing the media watchdog significant powers to enforce this code and force tech firms to hand over data. The legislation was slated to be introduced in the second half of this year, although it is unclear whether the new Labor government will also commit to it.

Eight companies have signed up to the code so far, including Facebook parent company Meta, Google, Microsoft, TikTok and Redbubble, which is the only Australian company to be a signatory.

The newly released reports have been reviewed by independent expert Hal Crawford, who also developed best practice guidelines for the companies.

“The code drives greater transparency and public accountability around tech efforts to address harmful misinformation, and DIGI looks forward to working with the incoming government and others in our efforts to maximise its effectiveness,” DIGI managing director Sunita Bose said.

“The 2021 transparency reports provide new data on misinformation in Australia, and the many interventions to remove and flag fake claims and accounts, elevate reputable content and partner with researchers.”

In its report, Meta committed to providing more information on its content ranking algorithms, but also pointed to research suggesting that these algorithms are not contributing to social polarisation as has been claimed.

“Some commentators have expressed concern that social media spreads misinformation, and creates echo chambers and polarisation. Academic research on the role of algorithms and political and social polarisation has mixed results, with many studies suggesting that social media is not the primary driver of polarisation,” Meta’s report said.

“Nonetheless, Meta aims to provide greater transparency and control for users around how algorithms rank and distribute content. To this end, we have included an additional commitment to provide transparency of the work we do here.”

Meta has released content distribution guidelines, recommendation guidelines and a widely viewed content report in the US.

In 2021, 11 million pieces of content were removed from Facebook and Instagram around the world for violating the company’s standards in relation to health misinformation, with about 180,000 pieces of this content coming from Australia.

This is a sharp increase from the 110,000 pieces of content removed in Australia in 2020.

Meta’s report and efforts to talk up its actions in addressing misinformation on Facebook come soon after whistleblowers alleged that the company deliberately “over-blocked” critical health pages as part of its news ban in early 2021.

Google’s report revealed that the tech giant removed more than 90,000 YouTube videos in Australia that violated its community guidelines, and that more than 5000 videos uploaded from Australia contained dangerous or misleading Covid-19 information.

The number of Australian medical misinformation videos removed from TikTok rose sharply across 2021, from just 24 taken down in January to more than 4000 in September.

“The growth in medical misinformation removals trended alongside factors directly related to Covid-19, including the arrival of the Delta strain, government-initiated measures to manage infections including lockdowns and travel restrictions, as well as the parallel rollout of the vaccination program,” the TikTok report said.

The misinformation code has been branded by the Centre for Responsible Technology as “inadequate” self-regulation that risks becoming a “digital fig leaf”.

In March this year the then-Coalition government announced plans to hand ACMA new powers to enforce this voluntary social media code, nearly three years after this recommendation was first made by the competition watchdog.

The plan would have given ACMA information-gathering powers to use on social media companies regarding their efforts to combat misinformation and disinformation, and handed it “reserve powers” to register and enforce existing codes if the voluntary code were deemed inadequate.

ACMA provided a report to the government in the middle of last year saying the current voluntary code was “limited by its definitions”.

The code’s threshold for requiring action from the social media giants is that a piece of misinformation or disinformation poses a “serious” and “imminent” harm.

“The effect of this is that signatories could comply with the code without having to take any action on the type of information which can, over time, contribute to a range of chronic harms, such as reductions in community cohesion and a lessening of trust in public institutions,” the ACMA report said.

ACMA also recommended that the code be opt-out rather than opt-in, and that private messaging be included under its remit.

