eSafety welcomes feedback on draft industry standards to tackle online child sexual abuse and pro-terror material

eSafety Commissioner

Australia’s eSafety Commissioner has commenced public consultation on draft industry standards which will require tech companies to do more to tackle seriously harmful content, including online child sexual abuse material and pro-terror content.

The draft standards cover Designated Internet Services, including apps, websites, and file and photo storage services; and Relevant Electronic Services, covering a range of messaging services as well as online dating services and gaming.

The standards address the production, distribution and storage of “synthetic” child sexual abuse and pro-terror material, created using open-source software and generative AI.

Under Australia’s Online Safety Act, which commenced in January last year, industry associations were tasked with drafting enforceable codes covering eight sectors of the online industry. These sectors included social media services, websites, search engines, app stores, internet service providers, device manufacturers, hosting services, and services such as email, messaging, gaming and dating services.

Earlier this year, the eSafety Commissioner found that six draft codes contained appropriate community safeguards and registered those codes. The two remaining draft industry codes covering Designated Internet Services and Relevant Electronic Services failed to provide sufficient safeguards and eSafety announced it would move to standards.

The current consultation process follows more than two years of work by industry, including earlier public consultation, to develop draft codes that would meet these community safeguards.

Each draft standard encompasses a suite of obligations, from proactive requirements to detect and deter unlawful content, to systems and processes for dealing with reports and complaints, to tools and information that empower end-users to stay safe and reduce the risk of this highly harmful content surfacing and being shared online.

eSafety Commissioner Julie Inman Grant invited industry and all interested stakeholders to participate in the consultation process.

“I encourage everyone to have their say on the draft standards because we all want these provisions to be as robust as possible, ensuring online services take meaningful steps to address the risk of illegal content online and protect the community,” Ms Inman Grant said. “These world-leading codes and standards cover the worst-of-the-worst online content, including child sexual abuse material and pro-terror content.

“eSafety, and indeed the wider community, expects that all online services should be taking all reasonable steps to prevent their services from being used to store, share and distribute this horrific content, and that’s what these standards are intended to achieve.

“The Relevant Electronic Services and Designated Internet Services codes were drafted by industry and did not provide appropriate community safeguards, including lacking a strong commitment to identify and remove known child sexual abuse material. Known child abuse material is material that has already been identified and verified by global child sexual abuse organisations and law enforcement agencies and continues to circulate online.

“There are already widely available tools, like Microsoft’s PhotoDNA, used by over 200 organisations and most large companies, that automatically match child sexual abuse images against these databases of ‘known’ and verified material.

“PhotoDNA is not only extremely accurate, with a false positive rate of 1 in 50 billion, but is also privacy protecting as it only matches and flags known child sexual abuse imagery.

“It’s important to emphasise this point: PhotoDNA is limited to fingerprinting images to compare with known, previously hashed, child abuse material. The technology doesn’t scan text in emails or messages, or analyse language, syntax, or meaning.

“Many large companies providing online services take similar steps in other contexts, processing webmail traffic using natural language processing techniques to filter out spam, or applying other categorisation rules.

“eSafety takes the privacy of all Australians very seriously, so I want to be very clear on this: eSafety is not requiring companies to break end-to-end encryption through these standards, nor do we expect companies to design systematic vulnerabilities or weaknesses into any of their end-to-end encrypted services.

“But operating an end-to-end encrypted service does not absolve companies of responsibility and cannot serve as a free pass to do nothing about these criminal acts.

“Our focus is on ensuring industry takes meaningful steps to prevent the proliferation of seriously harmful content like child sexual abuse material. Many in industry, including encrypted services, are already taking such steps to achieve these important outcomes.

“Meta’s end-to-end encrypted WhatsApp messaging service already scans the non-encrypted parts of its service, including profile and group chat names and pictures, for signals that accounts are providing or sharing child sexual abuse material.

“These and other interventions enable WhatsApp to make 1 million reports of child sexual exploitation and abuse each year. This is one example of measures companies can take.”
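The hash-matching approach described above can be illustrated with a simplified sketch. This is not PhotoDNA itself (which is proprietary and uses a perceptual hash robust to resizing and re-encoding); a cryptographic hash and a hypothetical database of known fingerprints stand in here, purely to show the matching workflow: content is reduced to a fingerprint, compared against a list of known fingerprints, and is never otherwise interpreted.

```python
import hashlib

# Illustrative stand-in for a perceptual hash. Real systems such as
# PhotoDNA use hashes that tolerate resizing and re-encoding; a
# cryptographic hash is used here only to demonstrate the workflow.
def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical database of fingerprints of known, verified material,
# of the kind maintained by child-safety organisations.
known_hashes = {fingerprint(b"example-known-item")}

def is_known(data: bytes) -> bool:
    # Only the fingerprint is compared against the database.
    # The content itself is never analysed, and no text, language,
    # syntax or meaning is scanned.
    return fingerprint(data) in known_hashes

print(is_known(b"example-known-item"))   # → True
print(is_known(b"an unrelated upload"))  # → False
```

The key privacy property the release highlights falls out of this design: a service only learns whether an item's fingerprint appears in the known-material database; everything else yields no match and no information.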

eSafety will closely consider all submissions received in preparing the final versions of the standards to be tabled in Parliament. eSafety proposes that the two standards come into force six months after they are registered.

Obligations in the industry codes and standards are backed by enforcement powers, which the eSafety Commissioner will use when appropriate.

eSafety encourages all stakeholders to read the Discussion Paper and the Fact Sheets for each draft Standard before preparing a submission.
