Australian Prime Minister Anthony Albanese announced a groundbreaking proposal on Thursday to implement a social media ban for children under 16. The proposed legislation would require social media platforms to verify users’ ages and ensure that minors are not accessing their services. Platforms that fail to comply would face substantial fines, while users or their parents would not face penalties for violating the law. Albanese emphasised that this initiative aims to protect children from the harmful effects of social media, stressing that parents and families could count on the government’s support.
The bill would not allow exemptions for children whose parents consent to their use of social media, and it would not ‘grandfather’ existing users who are underage. Social media platforms such as Instagram, TikTok, Facebook, X, and YouTube would be directly affected by the legislation. Minister for Communications Michelle Rowland said that these platforms had been consulted on how the law could be practically enforced, but that no exemptions would be granted.
While some experts have voiced concerns about the blanket nature of the proposed ban, suggesting that it might not be the most effective solution, social media companies, including Meta (the parent company of Facebook and Instagram), have expressed support for age verification and parental consent tools. Last month, over 140 international experts signed an open letter urging the government to reconsider the approach. This debate echoes similar discussions in the US, where there have been efforts to restrict children’s access to social media for mental health reasons.
A Moscow court has fined Apple 3.6 million roubles ($36,889) for refusing to remove two podcasts that were reportedly aimed at destabilising Russia’s political landscape, according to the RIA news agency. The court’s decision is part of a larger pattern of the Russian government targeting foreign technology companies for not complying with content removal requests. This action is seen as part of the Kremlin’s broader strategy to exert control over the digital space and reduce the influence of Western tech giants.
Since Russia’s invasion of Ukraine in 2022, the government has intensified its crackdown on foreign tech companies, accusing them of spreading content that undermines Russian authority and sovereignty. The Kremlin has already imposed similar fines on companies like Google and Meta, demanding the removal of content deemed harmful to national security or political stability. Critics argue that these moves are part of an orchestrated effort to suppress dissenting voices and maintain control over information, particularly in the face of growing international scrutiny.
Apple, like other Western companies, has faced mounting pressure to comply with Russia’s increasingly stringent regulations. While the company has largely resisted political content restrictions in other regions, the fine highlights the challenges it faces in operating within Russia’s tightly controlled media environment. Apple has not yet publicly commented on the ruling, but the decision reflects the growing risks for tech firms doing business in Russia as the country tightens its grip on digital platforms.
Mozambique and Mauritius are facing criticism for recent social media shutdowns amid political crises, with many arguing these actions infringe on digital rights. In Mozambique, platforms like Facebook and WhatsApp were blocked following protests over disputed election results.
Meanwhile, in Mauritius, the government imposed a similar blackout until after the 10 November election, following a wiretapping scandal involving leaked conversations of high-profile figures. Digital rights groups such as Access Now and the #KeepItOn coalition have condemned both shutdowns, arguing that they violate international human rights standards, including the African Commission on Human and Peoples’ Rights (ACHPR) resolution 580 and the International Covenant on Civil and Political Rights (ICCPR), as well as national constitutions.
In response, digital rights advocates are calling on telecommunications providers, including Emtel and Mauritius Telecom, to resist government orders to enforce the shutdowns. By maintaining internet connectivity, these companies could help preserve citizens’ access to information and uphold democratic principles in politically sensitive times.
Additionally, rights organisations argue that internet service providers have a unique role in supporting transparency and accountability, which are vital to democratic societies.
The Kremlin has called on Google to lift its restrictions on Russian broadcasters on YouTube, pointing to mounting legal claims against the tech giant as potential leverage. Google blocked more than a thousand Russian channels and over 5.5 million videos, including those of state-funded media, after halting ad services in Russia following the country’s invasion of Ukraine in 2022.
Russia’s legal actions against Google, initiated by 17 Russian TV channels, have led to compound fines based on the company’s revenue in Russia, accumulating to a staggering figure reportedly in the “undecillions,” according to Russian media. Kremlin spokesperson Dmitry Peskov described this enormous number as symbolic but urged Google to take these legal pressures seriously and reconsider its restrictions.
Google has so far not commented on these demands. Russian officials argue that the restrictions infringe on the rights of the country’s broadcasters and hope the significant financial claims will compel Google to restore access to Russian media content on YouTube.
The EU’s voluntary code of practice on disinformation will soon become a formal set of rules under the Digital Services Act (DSA). According to Paul Gordon, assistant director at Ireland’s media regulator Coimisiún na Meán, efforts are underway to finalise the transition by January. He emphasised that the new regulations should lead to more meaningful engagement from platforms, moving beyond mere compliance.
Originally established in 2022 and signed by 44 companies, including Google, Meta, and TikTok, the code outlines commitments to combat online disinformation, such as increasing transparency in political advertising and enhancing cooperation during elections. A spokesperson for the European Commission confirmed that the code aims to be recognised as a ‘Code of Conduct’ under the DSA, which already mandates content moderation measures for online platforms.
The DSA, which has applied to all platforms since February, imposes strict rules on the largest online services, requiring them to mitigate risks associated with disinformation. The new code will help these platforms demonstrate compliance with the DSA’s obligations, as assessed by the Commission and the European Board for Digital Services. However, no specific timeline has been given for the code’s formal adoption.
Elon Musk’s social media platform, X, is facing criticism from the Center for Countering Digital Hate (CCDH), which claims its crowd-sourced fact-checking feature, Community Notes, is failing to curb misinformation about the upcoming US election. According to a CCDH report, of 283 analysed posts containing misleading information, only 26% displayed corrective notes visible to all users, allowing false narratives to reach massive audiences. The 209 uncorrected posts gained over 2.2 billion views, raising concerns over the platform’s commitment to truth and transparency.
Community Notes was launched to empower users to flag inaccurate content, but critics argue the system alone may be insufficient to handle misinformation during critical events like elections. Calls for X to strengthen its safety measures follow the platform’s recent legal defeat against CCDH, which has faulted X for a rise in hate speech. The report also highlights Musk’s endorsement of Republican candidate Donald Trump as a potential complicating factor, since Musk himself has been accused of spreading misinformation.
In response to the ongoing scrutiny, five US state officials urged Musk in August to address misinformation on X’s AI chatbot, which has reportedly circulated false claims related to the November election. X has yet to respond to these calls for stricter safeguards, and its ability to manage misinformation effectively remains under close watch as the election approaches.
Missouri’s Attorney General Andrew Bailey announced an investigation into Google on Thursday, accusing the tech giant of censoring conservative speech. Bailey’s statement, shared on social media platform X, criticised Google, calling it “the biggest search engine in America,” and alleged that it has engaged in bias during what he referred to as “the most consequential election in our nation’s history.” Bailey did not cite specific examples of censorship, sparking quick dismissal from Google, which labelled the claims “totally false” and maintained its commitment to showing “useful information to everyone—no matter what their political beliefs are.”
Republicans have long contended that major social media platforms and search engines demonstrate an anti-conservative bias, though tech firms like Google have repeatedly denied these allegations. Concerns around this issue have intensified during the 2024 election campaign, especially as social media and online search are seen as significant factors influencing public opinion. Bailey’s investigation is part of a larger wave of Republican-led inquiries into potential online censorship, often focused on claims that conservative voices and views are suppressed.
Adding to these concerns, Donald Trump, the Republican presidential candidate, recently pledged that if he wins the upcoming election, he would push for the prosecution of Google, alleging that its search algorithm unfairly targets him by prioritising negative news stories. Trump has not offered evidence for these claims, and Google has previously stated its search results are generated based on relevance and quality to serve users impartially. As the November 5 election draws near, this investigation highlights the growing tension between Republican officials and major tech platforms, raising questions about how online content may shape future political campaigns.
Thousands of creatives, including Kevin Bacon, Thom Yorke, and Julianne Moore, have signed a petition opposing the unlicensed use of their work to train AI. The 11,500 signatories believe that such practices threaten their livelihoods and call for better protection of creative content.
The petition argues that using creative works without permission for AI development is an ‘unjust threat’ to the people behind those works. Signatories from various industries, including musicians, writers, and actors, are voicing concerns over how their work is being used by AI companies.
British composer Ed Newton-Rex, who organised the petition, has spoken out against AI companies, accusing them of ‘dehumanising’ art by treating it as mere ‘training data’. He highlighted the growing concerns among creatives about how AI may undermine their rights and income.
The UK government is currently exploring new regulations to address the issue, including a potential ‘opt-out’ model for AI data scraping, as lawmakers look for ways to protect creative content in the digital age.
The parents of a Massachusetts high school senior are suing Hingham High School and its district after their son received a “D” grade and detention for using AI in a social studies project. Jennifer and Dale Harris, the plaintiffs, argue that their son was unfairly punished, as there was no rule in the school’s handbook prohibiting AI use at the time. They claim the grade has impacted his eligibility for the National Honor Society and his applications to top-tier universities like Stanford and MIT.
The lawsuit, filed in Plymouth County District Court, alleges the school’s actions could cause “irreparable harm” to the student’s academic future. Jennifer Harris stated that their son’s use of AI should not be considered cheating, arguing that AI-generated content belongs to its creator. The school, however, classified it as plagiarism. The family’s lawyer, Peter Farrell, contends that there is ample support for the family’s view that using AI is not plagiarism.
The Harrises are seeking to have their son’s grade changed and his academic record cleared. They emphasised that while they can’t reverse past punishments like detention, the school can still adjust his grade and confirm that he did not cheat. Hingham Public Schools has not commented on the ongoing litigation.
Meta’s Oversight Board has opened a public consultation on immigration-related content that may harm immigrants following two controversial cases on Facebook. The board, which operates independently but is funded by Meta, will assess whether the company’s policies sufficiently protect refugees, migrants, immigrants, and asylum seekers from severe hate speech.
The first case concerns a Facebook post made in May by a Polish far-right coalition, which used a racially offensive term. Despite the post accumulating over 150,000 views, 400 shares, and receiving 15 hate speech reports from users, Meta chose to keep it up following a human review. The second case involves a June post from a German Facebook page that included an image expressing hostility toward immigrants. Meta also upheld its decision to leave this post online after review.
Following the Oversight Board’s intervention, Meta’s experts reviewed both cases again but upheld the initial decisions. Helle Thorning-Schmidt, co-chair of the board, stated that these cases are critical in determining if Meta’s policies are effective and sufficient in addressing harmful content on its platform.