AFL trials AI to counter online abuse targeting players

The trial responds to rising incidents of targeted online harassment and aims to create a safer digital environment for players through better moderation of the league's social media channels.


The AFL (Australian Football League) is testing AI to detect and counter abusive social media posts targeting players. The trials explore whether the AFL can moderate its social media channels more effectively to create a safer online environment for fans and players.

The trials follow increasing reports of online abuse. In recent months, AFLW players have faced targeted harassment, including racist messages, and instances of antisemitic and transphobic abuse have also been investigated. The AFL aims to use AI and other tools to combat such behavior.

The AFL is trialing tools from several companies, including Canadian startup Areto Labs, whose algorithm can detect harmful online content and automatically mute, block, and report the responsible accounts. However, the responsibility to take further action ultimately lies with social media platforms and law enforcement.
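In broad strokes, a tool like this chains an abuse classifier to automated account actions. The Python sketch below is a minimal, hypothetical illustration of that pipeline; it is not Areto Labs' actual system, and the keyword scorer, threshold, and action names are all illustrative assumptions.

```python
from dataclasses import dataclass

# Placeholder vocabulary: a real system would use a trained classifier,
# not a keyword list. Everything here is illustrative.
ABUSIVE_TERMS = {"slur1", "slur2", "threat1"}

@dataclass
class Comment:
    account: str
    text: str

def toxicity_score(text: str) -> float:
    """Toy stand-in for a learned model's abuse-probability output."""
    words = set(text.lower().split())
    return 1.0 if words & ABUSIVE_TERMS else 0.0

def moderate(comment: Comment, threshold: float = 0.8) -> list[str]:
    """Map a toxicity score to the escalating actions the article
    describes: mute the comment, block the account, report it."""
    if toxicity_score(comment.text) < threshold:
        return []  # benign: leave the comment alone
    return ["mute", "block", "report"]

# Example: an abusive comment triggers all three actions.
print(moderate(Comment("troll123", "slur1 aimed at a player")))
# -> ['mute', 'block', 'report']
```

The threshold matters in practice: set it too low and the tool over-censors legitimate criticism (a limitation experts raise below); set it too high and abuse slips through.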

The eSafety Commissioner, which has partnered with the AFL, has powers to remove abusive content, but policy and cultural change within social media spaces are equally vital.

Why does it matter? While AI tools can help, experts caution that they have limitations, including the risk of over-censorship. These tools also typically monitor only public comments, leaving private messages unchecked. Even when individual messages are not explicitly threatening, the psychological toll of a continuous stream of abuse accumulates over time. Researchers argue that cultivating healthier online cultures is essential to addressing the root of the problem.