Australia is stepping up its efforts to curb the spread of misinformation online with a new law that could see tech platforms fined up to 5% of their global revenue if they fail to prevent the dissemination of harmful content. The legislation, part of a broader crackdown on tech giants, aims to hold platforms accountable for falsehoods that threaten election integrity, public health, or critical infrastructure.
Under the proposed law, platforms must create codes of conduct detailing how they will manage misinformation. These codes must be approved by a regulator, which can impose its own standards and penalties if platforms fail to comply. The government has emphasised the importance of addressing misinformation, warning of its risks to democracy and public safety. Communications Minister Michelle Rowland stressed that inaction would allow the problem to worsen, making it clear that the stakes are high for society and the economy.
The new legislation has sparked debate, with free speech advocates raising concerns about government overreach. A previous version of the bill was criticised for giving regulators too much power to define what constitutes misinformation. However, the revised proposal includes safeguards, ensuring that professional news, artistic, and religious content is protected while limiting the regulator’s ability to remove specific posts or user accounts.
Tech companies, including Meta and X, have expressed reservations about the law. Meta, which serves a significant portion of Australia’s population, has remained tight-lipped on the legislation, while industry group DIGI has raised questions about its implementation. Meanwhile, X (formerly Twitter) has reduced its content moderation efforts, particularly following its acquisition by Elon Musk, adding another layer of complexity to the debate.
Australia’s stringent legal initiative is part of a global trend, with governments worldwide looking for ways to address the influence of tech platforms. As the country heads into an election year, leaders must ensure that foreign-controlled platforms do not undermine national sovereignty or disrupt the political landscape.
Ireland’s Data Protection Commission (DPC), the leading privacy watchdog for many US tech firms in the EU, is investigating Google’s handling of user data. The inquiry will examine whether Google sufficiently protected the personal information of EU citizens before using it to develop its advanced AI model, Pathways Language Model 2 (PaLM 2). The investigation is part of a broader effort by the DPC, working alongside other EU regulators, to ensure compliance with data protection laws, especially in the development of AI technologies.
A group of Democratic senators, led by Amy Klobuchar, has called on the United States Federal Trade Commission (FTC) and the Department of Justice (DOJ) to investigate whether AI tools that summarise online content are anti-competitive. The concern is that AI-generated summaries keep users on platforms like Google and Meta, diverting traffic from the original content creators and depriving them of advertising revenue.
The senators argue that platforms profit from using third-party content to generate AI summaries, while publishers are left with fewer opportunities to monetise their work. Content creators are often forced to choose between having their work summarised by AI tools or opting out entirely from being indexed by search engines, risking significant drops in traffic.
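To make the opt-out trade-off concrete: the main lever publishers have is the robots.txt exclusion protocol, and how cleanly it can separate AI use from search indexing depends on how an operator splits its crawlers. A minimal sketch follows (the crawler tokens are illustrative examples; each operator documents its own crawlers, and what blocking one actually affects varies by operator):

```
# robots.txt — illustrative crawler tokens, not a definitive configuration

# Some operators run a dedicated AI crawler that can be refused
# without touching search indexing:
User-agent: GPTBot
Disallow: /

# But where a single crawler feeds both the search index and AI summaries,
# the only lever is all-or-nothing: blocking it also removes the site
# from search results.
User-agent: Googlebot
Disallow: /
```

This is the structural bind the senators describe: where the same crawler feeds both products, there is no way to refuse AI summarisation without also giving up search traffic.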
There is also a concern that AI features can misappropriate third-party content, passing it off as new material. The senators believe that the dominance of major online platforms is creating an unfair market for advertising revenue, as these companies control how content is monetised and limit the potential for original creators to benefit.
The letter calls for regulators to examine whether these practices violate antitrust laws. The FTC and DOJ will need to determine if the behaviour constitutes exclusionary conduct or unfair competition. The push from legislators could also lead to new laws if current regulations are deemed insufficient.
Russia is ramping up its efforts to control the internet by allocating nearly 60 billion roubles ($660 million) over the next five years to upgrade its web censorship system, known as TSPU. The system, developed by state regulator Roskomnadzor, is designed to filter and block content deemed harmful or illegal by the government. The funding, part of a broader ‘Cybersecurity Infrastructure’ project, will pay for new software and hardware and expand the system’s capabilities.
The initiative is seen as part of Moscow’s broader crackdown on online freedoms, which has intensified since Russia’s invasion of Ukraine in 2022. The government has been targeting independent media and social media platforms, blocking websites, and cracking down on the use of Virtual Private Networks (VPNs), which many Russians use to bypass government restrictions. Roskomnadzor has become increasingly effective at blocking access to these tools, with officials planning to enhance the system’s efficiency further.
The TSPU system was introduced under a 2019 law that requires internet service providers to install government-controlled equipment to monitor and manage web traffic. As of late 2022, over 6,000 TSPU devices had been deployed across Russian networks. The new funding will modernise this infrastructure and improve the system’s ability to detect and block VPN services, making it harder for Russians to access uncensored content.
Why does this matter?
While the Kremlin continues to position these measures as necessary for national security, critics see them as a blatant attack on free speech. Digital rights activists, including those from Roskomsvoboda, warn that while new investment in censorship technology will tighten government control, it is unlikely to eliminate access to independent information. Developers of VPNs and other circumvention tools remain determined, stating that innovation and motivation are essential in the ongoing struggle between censorship and free access.
Russia’s battle with VPNs and independent media is part of a broader campaign against what it calls Western information warfare. Despite the government’s efforts to clamp down, demand for alternative ways to access the internet remains high. Developers are working on more resilient tools, even as the state pours resources into strengthening its censorship apparatus. This tug-of-war between government control and free access to information seems set to continue, with both sides ramping up their efforts.
Tech industry leaders from some of the world’s most influential companies, including Google, Meta, Microsoft, and Adobe, are set to testify before the US Senate Intelligence Committee on 18 September. The hearing will focus on the growing threats to election security, particularly disinformation and misinformation, ahead of the closely watched 5 November election. As the nation prepares for a contentious face-off between Vice President Kamala Harris and former President Donald Trump, US officials are eager to ensure the integrity of the electoral process by addressing the risks of false online narratives.
Executives like Alphabet’s global affairs president Kent Walker, Meta’s Nick Clegg, and Microsoft’s Brad Smith are no strangers to congressional scrutiny, having testified before lawmakers in previous election-related hearings. Their appearance next week underscores the ongoing concerns about how foreign actors, such as Russia, Iran, and China, may attempt to meddle in American elections by exploiting digital platforms. These countries have repeatedly denied any interference while simultaneously accusing the US of involving itself in their political affairs, claims that Washington dismisses.
The testimony from these tech giants is expected to shed light on how their platforms are preparing to handle the threats of misinformation and foreign influence in the run-up to the election. With the stakes as high as ever in this tight political contest, the role of technology companies in safeguarding democracy will be front and centre.
The Nepalese government has lifted the ban on TikTok after nearly ten months, following a cabinet meeting on 22 August 2024. This decision came after discussions with ByteDance representatives, who agreed to several conditions for TikTok’s operation in Nepal. These conditions include registering as a business, appointing a local contact, promoting tourism, supporting digital literacy, and moderating content in Nepali languages.
The Nepal Telecommunications Authority (NTA) has directed all Internet Service Providers (ISPs) to lift the ban, citing Section 15 of the Telecommunications Act. TikTok has three months to meet the government’s conditions and will collaborate with local authorities to ensure compliance with the new regulations.
The ban was initially imposed in November 2023 due to concerns about social harmony and inappropriate content, drawing criticism on freedom-of-expression grounds. The decision to lift it has been welcomed by TikTok, which has pledged to foster creativity and free expression among Nepali users, reflecting a balance between regulation and digital innovation.
Google is facing another antitrust battle in a Virginia court, where the US Justice Department has accused the tech giant of monopolising the online advertising industry. Prosecutors argue that Google controls the infrastructure that handles hundreds of thousands of ad sales each second, using its size and dominance to push out competitors and restrict customer choice.
The trial, heard by US District Judge Leonie Brinkema, focuses on claims that Google acquired rivals and manipulated market transactions to gain control over both advertisers and publishers. The government’s case highlights how Google allegedly stifled competition and locked customers into its products, tactics reminiscent of traditional monopolies.
Google’s defence, led by attorney Karen Dunn, rejected the accusations, arguing that the case is based on outdated market conditions. She noted that Google now faces significant competition from other major tech companies, such as Amazon and Comcast, and that its tools have evolved to work alongside those of its rivals.
As the trial progresses, prosecutors are pushing for Google to be forced to sell off key parts of its ad business, including Google Ad Manager. The case is part of a broader effort by US authorities to curb the dominance of Big Tech, with other lawsuits targeting companies such as Apple, Meta, and Amazon.
Telegram founder Pavel Durov announced that the messaging platform will tighten its content moderation policies following criticism over its use for illegal activities. The decision comes after Durov was placed under formal investigation in France over the platform’s alleged use for fraud, money laundering, and the sharing of abusive content. In a message to his 12.2 million subscribers, Durov stressed that most users were law-abiding but acknowledged that a small percentage were tarnishing the platform’s reputation. He vowed to transform Telegram’s moderation practices from a source of criticism into one of praise.
While details on how Telegram will improve its moderation remain sparse, Durov revealed that some features frequently misused for illegal activity had already been removed. These include disabling media uploads on a standalone blogging tool and scrapping the People Nearby feature, which scammers had exploited. The platform will now focus on showcasing legitimate businesses instead. These changes follow Durov’s arrest and questioning in France, raising significant concerns within the tech industry over free speech, platform responsibility, and content policing.
Critics, including former Meta executive Katie Harbath, warned that improving moderation would not be simple. Harbath suggested that Durov, like other tech CEOs, may be in for a difficult task. Telegram also quietly updated its Frequently Asked Questions, removing language that previously claimed it did not monitor illegal content in private chats, signalling a potential shift in how it approaches privacy and illegal activity.
Durov also defended Telegram’s moderation efforts, stating that the platform removes millions of harmful posts and channels daily, dismissing claims that it is a haven for illegal content. He expressed surprise at the French investigation, noting that authorities could have contacted the company’s EU representative or himself directly to address concerns.
Malaysia’s communications minister, Fahmi Fadzil, announced on Sunday that he has instructed the communications regulator not to reroute web traffic through local DNS servers, following feedback from public engagement sessions. The proposed directive, set to take effect on 30 September, had raised concerns about potential online censorship and harm to Malaysia’s digital economy.
The Malaysian Communications and Multimedia Commission defended the measure as a safeguard against malicious content, such as online gambling, phishing, and copyright violations. However, critics argued that the plan would increase censorship and pose cybersecurity risks, such as DNS poisoning.
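To make the DNS concern concrete: rerouting means an ISP-operated resolver answers queries that would otherwise go to a public or authoritative server, and the two can disagree about where a domain lives. A minimal sketch using the third-party dnspython package compares answers from different resolvers (the domain and resolver addresses are placeholders; divergent answers are a signal, not proof of tampering, since CDNs legitimately return region-specific addresses):

```python
# pip install dnspython
import dns.resolver

DOMAIN = "example.com"  # placeholder domain, not a curated test list

RESOLVERS = {
    "Google public DNS": "8.8.8.8",
    "Cloudflare public DNS": "1.1.1.1",
    # an ISP-assigned resolver would be added here for a real comparison
}

def a_records(domain: str, nameserver: str) -> set[str]:
    """Ask a single nameserver for the domain's A records."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    resolver.lifetime = 5.0  # overall query timeout in seconds
    return {rr.address for rr in resolver.resolve(domain, "A")}

answers = {name: a_records(DOMAIN, ip) for name, ip in RESOLVERS.items()}
for name, addrs in answers.items():
    print(f"{name:24} -> {sorted(addrs)}")

# Divergence can indicate rerouting or poisoning, but can also be benign,
# so agreement checks like this are only a starting point for scrutiny.
print("resolvers agree:", len({frozenset(a) for a in answers.values()}) == 1)
```

The critics’ worry maps directly onto this sketch: once all queries are forced through local resolvers, users lose the independent reference point that makes such a comparison possible.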
Assemblyman Syed Ahmad Syed Abdul Rahman Alhadad labelled the directive ‘draconian’, cautioning that it could negatively impact the digital economy, which has seen significant investment from major tech companies. The government has been under growing scrutiny over its regulation of online content since Prime Minister Anwar Ibrahim came to power in 2022.
Fadzil emphasised the importance of combating online crime and vowed to continue engaging with stakeholders to find balanced solutions for a safer internet while maintaining economic stability.
Despite US authorities accusing employees of the Russian state media network RT of attempting to influence the 2024 presidential election, many of the social media posts linked to the case remain accessible. Tenet Media, the company at the centre of the allegations, still has hundreds of posts on platforms like TikTok, Instagram, and X. So far, only YouTube has acted by removing several of Tenet’s channels.
Prosecutors allege that RT employees covertly paid US commentators to post divisive content, though the commentators were unaware of RT’s involvement. The situation highlights the challenges social media platforms face when dealing with influence operations that involve real American users rather than fake or state-run accounts. That complexity has led to hesitation in taking swift action, reflecting the difficult balance between moderating content and avoiding censorship of legitimate speech.
The US Justice Department claims the scheme involved millions of dollars, but Tenet Media and the social media companies have not commented on how they plan to address the issue. Social media platforms are now deliberating on how to navigate these murky political and legal waters.