Facebook signs new misinformation code


Denham Sadler
National Affairs Editor

Facebook has signed up to a voluntary Big Tech code aimed at combating online misinformation just days after blocking all legitimate news content across its platform.

Along with Google, Twitter, Microsoft, Redbubble and TikTok, Facebook has become a signatory to the new Australian Code of Practice on Disinformation and Misinformation, after the federal government in late 2019 directed the Big Tech firms to develop their own set of principles.

This was a recommendation from the Australian Competition and Consumer Commission’s long-running inquiry into digital platforms.

It comes just days after Facebook flicked the switch and blocked all local and international news for its Australian users in response to the federal government’s media bargaining code passing the lower house with bipartisan support.

Misinformation: Facebook has signed up to a misinformation code days after banning news from its service

It’s unclear how Facebook will meet a number of the objectives and principles in the code while it maintains its news ban in Australia.

The voluntary code has already been slammed by the Centre for Responsible Technology, which labelled it “inadequate” self-regulation that risks becoming a “digital fig leaf”.

The code, managed by industry group DIGI, aims to reduce the risk of online misinformation and disinformation causing harm to Australians.

The companies signed up to the code commit to introducing and maintaining safeguards against misinformation, including a range of “scalable measures” to reduce its spread and visibility.

“This new code of practice has seen a diverse set of digital companies collaborate with each other, government, academia and civil society to propose solutions to the incredibly complex challenges of misinformation and disinformation online,” DIGI managing director Sunita Bose said.

“People misleading others, or people being misinformed, are not new problems – but the digital era means that false information can spread faster and wider than before. In this code, we’ve worked to get the balance right with what we think people expect when communicating over the internet.

“Companies are committing to robust safeguards against harmful misinformation and disinformation that also protect privacy, freedom of expression and political communication.”

The code defines misinformation and disinformation as “digital content that is verifiably false or misleading or deceptive” and that is likely to cause harm, either to individual health, public goods or the political process.

It defines disinformation as being propagated through “inauthentic behaviour” such as spam, bots or “bulk and aggressive behaviours”, while misinformation is propagated by ordinary individual users.

The code does not apply to private messages or emails, and provides exclusions for satire, authorised government content and political advertising.

Signatories will be able to choose which objectives and measures they agree to, and have another three months to decide this. A facility to address non-compliance will also be established within six months, but it will not have any enforcement powers.

Signatories will have to report annually on the measures they have undertaken as part of the code, with DIGI to release the first set of reports in May.

The code comprises seven objectives and 10 outcomes, with a list of example measures but no set requirements.

The only objective that the signatories must sign up to is the first, to “provide safeguards against harms that may arise from disinformation and misinformation”.

The measures under this objective may include human review of content, the labelling of false content, the removal of content, the suspension of accounts linked with this behaviour, and the use of technology such as algorithmic review.

Another measure, one that Facebook may struggle to abide by currently, is “prioritising credible and trusted news sources that are subject to a published editorial code”. The code does, however, include a clause “noting that some signatories may remove or reduce the ranking of news content which violates their policies”.

Another opt-in objective is to disrupt advertising and the monetisation incentives for disinformation, through means such as brand safety and verification tools, the use of third-party verification companies, and the blocking of ads with disinformation.

The code includes an objective to improve public awareness around political advertising on the digital platforms, with greater transparency around the source of ads. It gives companies the option not to target advertisements based on users’ inferred political affiliations, but imposes no requirement to refrain from doing so.

DIGI will establish a subcommittee with representatives from the signatories and independent members which will meet every six months to review the application of the code, with a review of the code itself to take place in 12 months.

The Australia Institute’s Centre for Responsible Technology quickly slammed the code, saying it leaves too much to the discretion of the Big Tech firms in the form of self-regulation.

“[The code is an] inadequate response to the spread of misinformation that, yet again, asks the Australian public to put their faith in the Big Tech platforms to manage their own affairs,” Centre for Responsible Technology director Peter Lewis said.

“Disinformation is a very serious issue and it needs to be taken seriously. Rather than an unenforceable industry code, misinformation should be treated as a serious online harm and included in the Online Safety Act.”

Mr Lewis said he had “no confidence” in the oversight committee as it does not have enforcement powers and will only meet every six months.

“In recent days Facebook has shown it is capable of removing huge swathes of content from its site to forward its own political agenda – yet continues to claim it cannot discharge a general responsibility to manage damaging and dangerous content on its platform,” he said.

“Without a legally enforceable obligation to actively manage misinformation and disinformation, we fear this code will simply become a digital fig leaf.”

In a paper on the submissions received throughout the process, DIGI acknowledged criticisms of the self-regulatory nature of the code, but said this approach was government policy.

“The self-regulatory approach to the code is supported by the need to devise a solution that can encompass the diversity of platforms and the speed with which these issues are evolving, as well as the emerging range of technologies to combat them,” the paper said.

“We also believe these submissions have not properly considered the significant concerns that can be incurred by ‘hard regulatory’ approaches to misinformation.”

The organisation said it did make changes from the draft code as a result of the feedback, including extending it to cover misinformation as well as disinformation, providing protections for marginalised and vulnerable groups, making agreement to the first objective mandatory, and establishing the non-compliance facility.

 
