YouTube has terminated the Tenet Media channel and four other channels run by its owner, Lauren Chen, after an indictment by the US Department of Justice. The Justice Department filed money-laundering charges against two employees of Russian state media network RT, accusing them of using shell companies to funnel $10 million to an unnamed US company to produce online content aimed at influencing the 2024 presidential election.
Prosecutors said the accused used fake identities to hire an American firm to create videos designed to deepen political divides in the United States. Though the company was not identified by name, court details point to Tenet Media, a Nashville-based organisation responsible for nearly 2,000 YouTube videos in under a year.
Tenet Media did not respond to requests for comment after its channels were removed by YouTube.
The indictment reflects growing concern about foreign interference in US elections, with platforms such as YouTube taking action against channels involved in such activities.
New Mexico has filed a lawsuit against Snap Inc, alleging that Snapchat’s design facilitates the sharing of child sexual exploitation material. Attorney General Raul Torrez stated that a months-long investigation found Snapchat to be a key platform for sextortion, where predators coerce minors into sending explicit content.
Snap said it is reviewing the complaint and will respond in court. The company has invested significant funds into trust and safety measures and continues to work with law enforcement and safety experts to combat such issues.
Snapchat is widely used by teens because of its disappearing-message feature, which the lawsuit says misleads users into believing their content vanishes. According to Torrez, predators can permanently capture that content, building a virtual collection of child sexual images that is shared indefinitely.
As part of the investigation, investigators operated a decoy Snapchat account and found 10,000 records of child sexual abuse material on dark web sites, where Snapchat was identified as a major source of such content. New Mexico sued Meta last December on similar grounds.
Brazil’s Supreme Court is set to make the final decision in a case involving the suspension of the social media platform X, formerly known as Twitter. The case, brought by the conservative political party Partido Novo, challenges Justice Alexandre de Moraes’ order to block the platform for failing to comply with court orders to remove accounts spreading misinformation and hate speech. The suspension has sparked a national debate, with many viewing it as a fight over freedom of expression and the rule of law.
Justice Kassio Nunes Marques referred the case to the full Supreme Court, highlighting its significance for public and social order. However, Marques may still issue an individual ruling before the case reaches all 11 justices. In the meantime, Brazil’s attorney general’s office has been asked to provide its opinion on the matter.
The suspension has divided public opinion in Brazil, with a slight majority supporting Moraes’ decision, while others, including Elon Musk, owner of X, have accused the judge of overreach. Musk has called Moraes a ‘dictator’ and criticised the freezing of assets tied to his satellite firm, Starlink, which was implemented to cover potential fines. Brazil’s president, Luiz Inacio Lula da Silva, has supported the ban, insisting that tech companies must respect local laws.
Meta’s Oversight Board has advised the Facebook parent company not to automatically remove the phrase ‘From the river to the sea’, which is interpreted by some as a show of solidarity with Palestinians and by others as antisemitic. The board determined that the phrase holds multiple meanings and cannot be universally deemed harmful or violent.
The phrase refers to the region between the River Jordan and the Mediterranean Sea, encompassing Israel and the Palestinian territories. Often used at pro-Palestinian rallies, critics argue it calls for Israel’s destruction, while others dispute this interpretation. The board emphasised the importance of context in assessing such political speech, urging Meta to allow space for debate, particularly during times of conflict.
Meta expressed support for the board’s review, acknowledging the complexities involved in global content moderation. However, the Anti-Defamation League criticised the decision, saying the phrase makes Jewish and pro-Israel communities feel unsafe. The Oversight Board also called on Meta to restore data access for researchers and journalists following its recent decision to end the CrowdTangle tool.
The board’s ruling highlights the ongoing challenge of regulating sensitive content on social media platforms and the need to balance free speech with community safety.
Australia’s government is advancing its AI regulation framework with new rules focusing on human oversight and transparency. Industry and Science Minister Ed Husic announced that the guidelines aim to ensure that AI systems have human intervention capabilities throughout their lifecycle to prevent unintended consequences or harm. These guidelines, though currently voluntary, are part of a broader consultation to determine if they should become mandatory in high-risk settings.
The initiative follows rising global concern about the role of AI in spreading misinformation and fake news, fuelled by the growing use of generative AI models such as OpenAI’s ChatGPT and Google’s Gemini. In response, other jurisdictions, such as the European Union, have already enacted more comprehensive AI laws to address these challenges.
Australia’s existing AI regulations, first introduced in 2019, were criticised as insufficient for high-risk scenarios. Husic noted that only about one-third of businesses use AI responsibly, underscoring the need for stronger measures to ensure safety, fairness, accountability, and transparency.
The US government indicted two Russian nationals and seized over 30 internet domains on Wednesday, disrupting an operation aimed at influencing the American election. However, an extensive FBI dossier revealed a broader Russian campaign targeting political and social stability in Europe. The 277-page affidavit detailed plans to manipulate politicians, businesspeople, journalists, and influencers in Germany, France, Italy, and the UK, with the Kremlin intending to sow division, discredit the US, and undermine support for Ukraine.
Documents showed that the Social Design Agency, acting at the direction of Sergey Kiriyenko, Deputy Chief of Staff to President Vladimir Putin, orchestrated these efforts. The agency used real social media posts to bypass bot filters and created ‘doppelgänger domains’ that mimicked reputable media outlets such as Reuters and Le Monde to spread fake news. Funded through cryptocurrencies such as bitcoin, these methods aimed to stoke both rational and emotional anti-Western sentiment, questioning the necessity of supporting Ukraine and encouraging criticism of the United States.
Germany was identified as particularly vulnerable due to its economic ties with Russia. Russian memos stressed discrediting the USA, Great Britain, and NATO, while convincing Germans to oppose sanctions.
Another operation, ‘International Conflict Incitement,’ focused on escalating tensions in France and Germany, using fake articles and targeted social media posts to create conflicts and destabilise these societies.
Why does it matter?
The findings underscore how pervasive the strategic manipulation of public opinion through cyber operations has become. The FBI evidence makes clear the depth and breadth of these influence operations, which seek to escalate internal tensions and promote the interests of the Russian Federation, and highlights both the ongoing geopolitical tensions and the sophisticated nature of modern information warfare.
Meta’s Oversight Board has issued a ruling addressing how the company should handle posts concerning Venezuela’s state-supported armed groups, known as ‘colectivos’. It follows Meta’s request for guidance on moderating increasing volumes of ‘anti-colectivos content’, which highlighted two specific posts for review: an Instagram post aimed at the colectivos saying ‘Go to hell! I hope they kill you all!’, and a Facebook post criticising Venezuela’s security forces and stating ‘kill those damn colectivos’.
The Oversight Board determined that neither post violated Meta’s rules on calls for violence, instead categorising both as ‘aspirational statements’ from citizens facing severe repression and threats to free expression from state-supported forces. The board justified this by noting the colectivos’ role in repressing civic space and committing human rights violations in Venezuela, particularly during the current post-election crisis. The board emphasised that the civilian population is predominantly the target of such abuses.
Additionally, the board critiqued Meta’s practice of making political content less visible across its platforms during critical times, expressing concerns that this could undermine users’ ability to express political dissent and raise awareness about the situation in Venezuela. It recommended that Meta adapt its policies to ensure political content, especially during crises like elections and post-electoral protests, receives the same reach as non-political content. This adjustment is vital for enabling citizens to share and amplify their political grievances during significant socio-political turmoil.
Why does it matter?
This decision is part of an ongoing debate about the role of political content on Meta’s platforms. Earlier this year, the board accepted its first case related to a post on Threads, another Meta service, focusing on the company’s decision to limit recommendations of political posts. The outcome of this related case is still pending, signalling potential further policy changes regarding political content on Meta’s platforms. The board’s decision underscores the critical role of context in content moderation, particularly in regions experiencing significant political and social upheaval.
Elon Musk’s X secured a legal victory on Wednesday when a US appeals court partially blocked a California law regulating how social media companies manage disinformation, hate speech, and extremism. The ruling, issued by a three-judge panel from the 9th US Circuit Court of Appeals in San Francisco, overturned a previous decision allowing the law to go into effect.
The contested California law mandates that large social media platforms publicly disclose their content moderation policies and provide reports on how they address objectionable posts. Musk had challenged the law, arguing that it infringed on the platform’s First Amendment rights, which protect free speech under the US Constitution.
Initially, US District Judge William Shubb denied Musk’s request to block the law, ruling that it was not overly burdensome concerning First Amendment concerns. However, the appeals court took a different stance, finding that the requirements placed on social media companies were ‘more extensive than necessary’ and suggesting that the law overreached in its effort to enforce transparency.
Why does it matter?
The decision is part of a larger legal debate over how far states can regulate social media platforms. Musk’s case joins other legal battles in Texas and Florida, where similar laws are being contested because they may violate free speech protections. The US Supreme Court has directed lower courts to revisit these cases.
The next step for X’s lawsuit is a further review by the lower court, which must now decide if the content moderation section of the California law can be separated from its other provisions.
The Telecom Regulatory Authority of India (TRAI) and Google have introduced new measures to enhance user security and reduce spam. The changes are particularly significant for mobile users in India, focusing on safer online transactions and higher-quality applications available for download. With these measures, TRAI and Google aim to safeguard digital interactions so that users can navigate their smartphones with greater confidence and security.
A key component of this initiative is TRAI’s new directive to combat spam calls and fraudulent messages. The regulation requires telecom operators to block unregistered numbers immediately, a measure intended to protect users from scams. It may, however, delay the delivery of one-time passwords (OTPs) during online transactions, since institutions such as banks must register their numbers before they can continue sending OTPs without interruption. While this could cause minor inconvenience, it is a crucial step toward preventing fraud and enhancing overall security for users.
In conjunction with TRAI’s efforts, Google has tightened its Play Store policies to remove low-quality and potentially harmful apps. The initiative aims to ensure that only trustworthy applications are available for download, and the crackdown is expected to significantly reduce the risk of malware, giving users a more secure environment in which to download and use apps without compromising their personal information.
Elon Musk’s Starlink has become entangled in a legal dispute with Brazil, as the company reluctantly complies with a court order to block access to the social media platform X in the country. The compliance comes just a day after Starlink initially informed Brazil’s telecom regulator, Anatel, that it would defy the order, setting up a clash with the Brazilian judiciary. The legal battle centres on actions by Supreme Court judge Alexandre de Moraes, who ordered the freezing of Starlink’s accounts as a precaution against unpaid fines owed by X, another Musk-controlled company.
The conflict escalated after Moraes directed all internet providers in Brazil to block access to X, citing the platform’s failure to maintain a legal representative, which was one of the conditions imposed by the court. The decision, which was upheld by a panel of Supreme Court justices, has led to the platform’s shutdown in Brazil. Despite initial resistance, Starlink reversed its stance and began implementing the block, with Anatel confirming that access to X has already started being cut off.
To our customers in Brazil (who may not be able to read this as a result of X being blocked by @alexandre):
The Starlink team is doing everything possible to keep you connected.
Following last week’s order from @alexandre that froze Starlink’s finances and prevents Starlink…
Starlink, which serves over 200,000 customers in Brazil, expressed its discontent with the situation in a post on X, labelling the freezing of its assets as illegal. The company has initiated legal proceedings in the Brazilian Supreme Court, arguing that Moraes’ orders violate the Brazilian constitution. However, Starlink missed a deadline to file a new appeal against the asset freeze, leaving its next legal steps uncertain.
The standoff highlights broader tensions between Musk and the Brazilian judiciary, raising concerns about the balance between state power and the protection of free speech. Musk’s pushback against what he views as government overreach has escalated into a heated legal battle, with potential implications for internet freedom and the role of tech companies in upholding or challenging state authority.