Tech companies grilled on how they are tackling terror and violent extremism

Australia’s eSafety Commissioner has issued legal notices to Google, Meta, Twitter/X, WhatsApp, Telegram and Reddit requiring each company to report on steps they are taking to protect Australians from terrorist and violent extremist material and activity.

The spread of this material and its role in online radicalisation remains a concern both in Australia and internationally. The 2019 terrorist attacks in Christchurch, New Zealand, and Halle, Germany, and more recently the 2022 attack in Buffalo, New York, underscore how social media and other online services can be exploited by violent extremists, leading to radicalisation and threats to public safety.

The online safety regulator issued the notices under transparency powers granted by the Online Safety Act, requiring the six companies to answer a series of detailed questions about how they are tackling the issue.

eSafety Commissioner Julie Inman Grant said eSafety continues to receive reports about perpetrator-produced material from terror attacks, including the 2019 terrorist attack in Christchurch, being reshared on mainstream platforms.

“We remain concerned about how extremists weaponise technology like live-streaming, algorithms and recommender systems and other features to promote or share this hugely harmful material,” Ms Inman Grant said.

“We are also concerned by reports that terrorists and violent extremists are moving to capitalise on the emergence of generative AI and are experimenting with ways this new technology can be misused to cause harm.

“Earlier this month the UN-backed Tech against Terrorism reported that it had identified users of an Islamic State forum comparing the attributes of Google’s Gemini, ChatGPT, and Microsoft’s Copilot.

“The tech companies that provide these services have a responsibility to ensure that these features and their services cannot be exploited to perpetrate such harm and that’s why we are sending these notices to get a look under the hood at what they are and are not doing.”

According to a recent OECD report, Telegram ranks first among mainstream platforms for the prevalence of terrorist and violent extremist material, with Google’s YouTube second and Twitter/X third. The Meta-owned Facebook and Instagram round out the top five, placing fourth and fifth respectively.

WhatsApp is ranked eighth. Separately, reports have confirmed that the Buffalo shooter’s ‘manifesto’ cited Reddit as a service that played a role in his radicalisation towards violent white supremacist extremism.

“It’s no coincidence we have chosen these companies to send notices to as there is evidence that their services are exploited by terrorists and violent extremists. We want to know why this is and what they are doing to tackle the issue,” Ms Inman Grant said.

“Transparency and accountability are essential for ensuring the online industry is meeting the community’s expectations by protecting their users from these harms. Also, understanding proactive steps being taken by platforms to effectively combat TVEC is in the public and national interest.

“That’s why transparency is a key pillar of the Global Internet Forum to Counter Terrorism and the Christchurch Call, global initiatives that many of these companies are signed up to. And yet we do not know the answer to many of these basic questions.

“And, disappointingly, none of these companies have chosen to provide this information through the existing voluntary framework – developed in conjunction with industry – provided by the OECD. This shows why regulation, and mandatory notices, are needed to understand the true scope of the challenges, and opportunities.”

As part of these notices, eSafety will also be asking Telegram and Reddit about measures they have in place to detect and remove child sexual exploitation and abuse.

The six companies will have 49 days to provide responses to the eSafety Commissioner.
