X fined $386,000 by Australia for not complying with anti-child abuse probe

While modest compared with the $44 billion Musk paid to acquire the platform in October 2022, the penalty carries significant implications for the company’s reputation and is likely to affect its advertising earnings in Australia.


Australia has fined Elon Musk’s social media platform, X, AU$610,500 ($386,000) for failing to cooperate with a probe into its anti-child-abuse practices. The Australian eSafety Commission notified X of the fine on October 3, citing the company’s failure to disclose how it handles child sexual abuse content.

In February, eSafety sent official notifications to X/Twitter, Google, TikTok, Twitch, and Discord in accordance with Australia’s Online Safety Act 2021. The recently established transparency provisions mandate that these tech firms respond to inquiries regarding their actions to address underage material on their platforms.

After a second round of notifications, the eSafety Commission published a further report stating that X failed to answer questions about how long it took to respond to reports of child abuse material on the platform and what methods it used to detect such content.

Commissioner Julie Inman Grant noted that X’s non-compliance was ‘more serious’ than that of the other companies. The platform now has 28 days to either request a withdrawal of the notice or pay the fine.

Why does it matter?

X has been struggling to retain advertisers amid complaints that it is lax in moderating content. Since Musk’s takeover, the platform has scaled back most content moderation, cut staff, and reinstated thousands of banned accounts, leading to a continuing decline in revenue as advertisers cut spending. Following Musk’s buyout, X also closed its Australian office, leaving no local representative to respond to these concerns. Australia, for its part, has recently implemented a code to prevent the sharing of AI-generated child sexual abuse material, opened a digital ID bill for consultation, and planned stricter misinformation legislation. By confronting these issues, Australia aims to position itself as a global front-runner in content moderation and in combating dangerous material online.