Time to take a look under the virtual hood at how algorithms might be harming our kids

Australia’s eSafety Commissioner has called for greater transparency and robust risk management from social media platforms and online services around the dangers their recommender algorithms pose to users, particularly children.

In its latest Tech Trends position paper, eSafety examines how these algorithms expose children to heightened risks, such as online sexual exploitation through friend or follower suggestions connecting them with adults, dangerous viral challenges, and harmful content loops that may contribute to negative mental health impacts.

Systems designed to maximise engagement may also try to draw people in through shocking and extreme content, which could normalise prejudice and hate by amplifying misogynistic, homophobic, racist and extremist views.

“Recommender algorithms can have positive benefits, like introducing us to new entertainment, experiences, friendships and ideas, based on factors such as our previous searches and online preferences,” eSafety Commissioner Julie Inman Grant said.

“But we also need to consider the risks, particularly to the most vulnerable among us – our children.”

“eSafety’s Mind the Gap research showed almost two-thirds of young people aged 14 to 17 were exposed to seriously harmful content relating to drug taking, suicide, self-harm, or violence. One in ten children have been the target of hate speech online, and one in four have been in online contact with an adult they didn’t know.

“Greater transparency about the data inputs, what a particular algorithm has been designed to achieve, and the outcomes for users are all critical to helping both the public and online safety regulators understand how these algorithms affect what we see and do online.

“We also need to make tech companies accountable for the impacts, particularly on children.

“If a child is particularly vulnerable in the real world, being served up more and more content relating to self-harm and suicide, dangerous challenges, or body image and eating disorders could not only have negative mental health impacts, but also potentially place them in real physical danger.”

The position paper recommends companies take a more proactive Safety By Design approach to recommender algorithms by considering the risks they may pose at the outset and designing in appropriate guardrails. This could include:

  • features that allow users to curate how a recommender system applies to them individually and opt out of receiving certain content
  • enforcing content policies to reduce the pool of harmful content on a platform, which reduces its potential amplification
  • labelling content as potentially harmful or hazardous
  • introducing human “circuit breakers” to review fast-moving content before it goes viral.
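The human "circuit breaker" idea above can be sketched as a simple velocity check that holds fast-moving content back from further recommendation until a reviewer clears it. This is an illustrative sketch only; the class name, thresholds and method names are assumptions, not drawn from eSafety's paper.

```python
from collections import deque

class CircuitBreaker:
    """Holds fast-moving items for human review before further amplification.

    An item whose share count inside a sliding time window exceeds
    `max_shares` is withheld from recommendation until a human approves it.
    All names and thresholds here are hypothetical.
    """

    def __init__(self, max_shares: int, window_seconds: float):
        self.max_shares = max_shares
        self.window = window_seconds
        self.events: dict[str, deque] = {}   # item_id -> share timestamps
        self.held: set[str] = set()          # items awaiting human review

    def record_share(self, item_id: str, now: float) -> None:
        q = self.events.setdefault(item_id, deque())
        q.append(now)
        # Drop share events that have aged out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        # Too many shares too quickly: trip the breaker.
        if len(q) > self.max_shares:
            self.held.add(item_id)

    def may_recommend(self, item_id: str) -> bool:
        return item_id not in self.held

    def approve(self, item_id: str) -> None:
        """A human reviewer clears the item for recommendation again."""
        self.held.discard(item_id)


# Example: more than 3 shares within 60 seconds trips the breaker.
breaker = CircuitBreaker(max_shares=3, window_seconds=60.0)
for t in (0.0, 1.0, 2.0, 3.0):
    breaker.record_share("video-1", now=t)
print(breaker.may_recommend("video-1"))  # False - held for review
breaker.approve("video-1")
print(breaker.may_recommend("video-1"))  # True
```

The design choice worth noting is that the breaker does not delete anything: it only pauses amplification, keeping the final call with a human reviewer, which matches the paper's framing of review before virality rather than automated removal.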

The paper also recommends enhancing transparency through regular algorithmic audits and impact assessments, curating recommendations so they are age appropriate, and introducing prompts to encourage users to reconsider posting harmful content.

“When Twitter started nudging people to remind them to actually read an article before retweeting it, it resulted in 40 per cent more people reading the full article, not just an inflammatory headline. This shows small changes can potentially make a big difference to online experiences and the online public commons,” Ms Inman Grant said.

“And as we hurtle headlong into the Web 3.0 world and the metaverse, immersive technologies and haptics could collect ever more sensitive biometric data, amplifying harmful or extreme content and causing people to experience resulting harms in a much more visceral way.”

eSafety’s mandatory reporting powers under the Online Safety Act’s Basic Online Safety Expectations are designed to make sure companies are transparent about which of these steps they are taking. Global regulatory efforts to tackle these issues are also beginning to emerge, such as the European Union’s new Digital Services Act, which aims to increase the accountability, transparency and public oversight of how online platforms shape content on their services.

The impact of algorithms and enhancing transparency of digital platform activities is also a focus for the new Digital Platform Regulators Forum, which brings together eSafety with the Australian Competition and Consumer Commission, the Australian Communications and Media Authority and the Office of the Australian Information Commissioner.

Beyond eSafety’s role in preventing and remediating online harms, the position paper touches upon some of the risks and issues associated with recommender systems which fall under these agencies’ remits, such as exclusionary conduct, mis- or disinformation and privacy concerns.

The position paper on recommender systems and algorithms can be found here.
