Draft online safety codes rejected by the eSafety Commissioner earlier this month have been publicly released by Australia’s tech sector after concerns over the lack of transparency in the industry-led process.
The draft codes were published on Thursday by the industry associations representing companies that provide platform services, including global giants like Google and Meta. The most recent “feedback” from the Commissioner has also been published.
The release coincides with the second round of legal notices from eSafety ordering Twitter, TikTok, Google, Amazon’s Twitch and Discord to detail the measures they are taking to tackle child exploitation material.
eSafety Commissioner Julie Inman Grant last week said that social media companies were doing “shockingly little” to detect, block and prevent online child sexual abuse material, with Twitter’s recent decision to axe all Australian staff further “frustrating” efforts.
The draft codes, which are the result of 17 months of work, have been developed to deal with the online treatment of Class 1 and Class 2 material – internet content which would either be refused classification, such as child sexual abuse and pro-terror material, or considered R18+.
They cover eight different sections of the online industry: social media services (SMS), relevant electronic services (RES), designated internet services (DIS), internet search engine services (SES), app distribution services, hosting services, internet carriage services (ISP) and equipment.
According to the Commissioner’s feedback, none of the eight codes submitted for review met the requirements of the Act because they do “not provide appropriate community safeguards for matters of substantial relevance to the community”.
Concerns include reporting timeframes (all eight codes), and the systems and processes used to detect and remove class 1A material (all but the ISP code) and class 1B material (RES, DIS, hosting services and equipment codes).
Other issues include proposed measures to limit the hosting of class 1A material and class 1B material in Australia (SMS, DIS, hosting services and equipment codes) and annual reporting (ISP and equipment codes).
The equipment code had the most issues identified, with eight matters raised by the eSafety Commissioner, followed by the DIS code with five.
Another major concern raised with all eight codes bar the hosting services code is that they refer to end-users as “Australian end-users” rather than “end-users in Australia”, which the online regulator argues is “materially different”.
“eSafety considers that ‘end-users in Australia’… and ‘Australian end-users’… are materially different concepts, despite the likely overlap. This is because the former reflects an end user’s geographical location, while the latter… reflects the ordinary resident status of the end-user,” Ms Inman Grant said in a letter to the associations.
“eSafety considers it unlikely that the draft [codes] would satisfy s 140(1)(b) of the Act because the code[s] is expressed to apply in respect of ‘Australian end-users’ and not to the relevant group of providers… or to the relevant online activity.”
Ms Inman Grant sent the codes back to industry this month on the grounds that they were “unlikely to provide the appropriate community safeguards” required for registration under the Online Safety Act, and gave the industry associations until the second week of March to address residual concerns.
“I have written to the industry associations and encouraged them to resubmit draft industry codes with improved protections and to provide them with a final opportunity to address areas of concern,” she said earlier this month.
A spokesperson for the six associations on Thursday said that “industry is in the process of reviewing and addressing the recent feedback from the eSafety Commission” ahead of the March deadline, but had requested a short extension to allow for further public consultation.
“We will continue to closely collaborate with the Office of the eSafety Commissioner to finalise the codes and reach the best possible outcome for the community,” the spokesperson said in a statement.
The industry associations involved in the drafting are the Communications Alliance, the Interactive Games and Entertainment Association, the Software Alliance, the Consumer Electronics Suppliers Association, the Australian Mobile Telecommunications Association, and DIGI.
If the eSafety Commissioner determines that the industry codes are not suitable to be registered, the regulator will seek to develop its own industry standards in consultation with the public and other stakeholders.
Greens Senator David Shoebridge has been pushing for the codes to be released publicly, telling a Senate Estimates hearing last week that the “public has a right to know what’s being proposed to protect us against extreme violence and online hate”.
“There’s an embarrassing irony in the industry’s refusal to release the codes, because the lack of transparency from big tech is a major part of the problem these codes need to address,” he said last week.
The eSafety Commissioner on Thursday also ordered Twitter, TikTok, Google, Amazon’s Twitch and Discord to detail the measures they are taking to tackle child exploitation material, with civil penalties of up to $700,000 a day on the table if the companies fail to respond within 35 days.
The mechanism is one of three means available under the government’s Basic Online Safety Expectations (BOSE) to help ‘lift the hood’ on the online safety initiatives being pursued by social media, messaging, and gaming service providers.
“Back in November, Twitter boss Elon Musk tweeted that addressing child exploitation was ‘Priority #1’, but we have not seen detail on how Twitter is delivering on that commitment,” Ms Inman Grant said on Thursday.
“We’ve also seen extensive job cuts to key trust and safety personnel across the company – the very people whose job it is to protect children – and we want to know how Twitter will tackle this problem going forward.
“These powers are designed to shine a light on these platforms and lift online safety by compelling them to give some straight answers to some straight questions.”
The first round of notices sent to Apple, Meta, Microsoft, Snap and online chat website owner Omegle in August found “many were not taking simple steps to protect children and are failing to use widely available technology… to detect and remove this material”.
Do you know more? Contact James Riley via Email.