A Moscow court has fined Apple 3.6 million roubles ($36,889) for refusing to remove two podcasts that were reportedly aimed at destabilising Russia’s political landscape, according to the RIA news agency. The court’s decision is part of a larger pattern of the Russian government targeting foreign technology companies for not complying with content removal requests. This action is seen as part of the Kremlin’s broader strategy to exert control over the digital space and reduce the influence of Western tech giants.
Since Russia’s invasion of Ukraine in 2022, the government has intensified its crackdown on foreign tech companies, accusing them of spreading content that undermines Russian authority and sovereignty. The Kremlin has already imposed similar fines on companies like Google and Meta, demanding the removal of content deemed harmful to national security or political stability. Critics argue that these moves are part of an orchestrated effort to suppress dissenting voices and maintain control over information, particularly in the face of growing international scrutiny.
Apple, like other Western companies, has faced mounting pressure to comply with Russia’s increasingly stringent regulations. While the company has largely resisted political content restrictions in other regions, the fine highlights the challenges it faces in operating within Russia’s tightly controlled media environment. Apple has not yet publicly commented on the ruling, but the decision reflects the growing risks for tech firms doing business in Russia as the country tightens its grip on digital platforms.
Aravind Srinivas, CEO of AI search company Perplexity, offered to step in and support New York Times operations amid a looming strike by the newspaper’s tech workers. The NYT Tech Guild announced the planned strike for November 4 after months of seeking better pay and working conditions. Representing workers involved in software support and data analysis on the business side, the guild has requested a 2.5% annual wage increase and a guaranteed two-day in-office work policy.
As tensions escalated, New York Times publisher AG Sulzberger called the timing of the strike ‘troubling’, noting that the paper’s election coverage is a public service at a crucial time. Responding publicly, Srinivas offered to help ensure uninterrupted access to the Times’s election news, sparking controversy as critics accused him of ‘scabbing’, a term for working in place of striking employees.
Srinivas clarified that his intent was to provide infrastructure support, not replace journalists, as his company has recently launched its own election information platform. However, the New York Times and Perplexity have been at odds recently, with the Times issuing a cease-and-desist letter last month over Perplexity’s alleged scraping of its content for AI use.
Mozambique and Mauritius are facing criticism for recent social media shutdowns amid political crises, with many arguing these actions infringe on digital rights. In Mozambique, platforms like Facebook and WhatsApp were blocked following protests over disputed election results.
Meanwhile, in Mauritius, the government imposed a similar blackout until after the 10 November election, following a wiretapping scandal involving leaked conversations of high-profile figures. Digital rights groups such as Access Now and the #KeepItOn coalition have condemned both shutdowns, arguing that they violate international human rights standards, including the African Commission on Human and Peoples’ Rights (ACHPR) resolution 580 and the International Covenant on Civil and Political Rights (ICCPR), as well as the countries’ own constitutions.
In response, digital rights advocates are calling on telecommunications providers, including Emtel and Mauritius Telecom, to resist government orders to enforce the shutdowns. By maintaining internet connectivity, these companies could help preserve citizens’ access to information and uphold democratic principles in politically sensitive times.
Additionally, rights organisations argue that internet service providers have a unique role in supporting transparency and accountability, which are vital to democratic societies.
The Kremlin has called on Google to lift its restrictions on Russian broadcasters on YouTube, highlighting mounting legal claims against the tech giant as potential leverage. Google blocked more than a thousand Russian channels and over 5.5 million videos, including state-funded media, after halting ad services in Russia following its invasion of Ukraine in 2022.
Russia’s legal actions against Google, initiated by 17 Russian TV channels, have led to compound fines based on the company’s revenue in Russia, accumulating to a staggering figure reportedly in the “undecillions,” according to Russian media. Kremlin spokesperson Dmitry Peskov described this enormous number as symbolic but urged Google to take these legal pressures seriously and reconsider its restrictions.
Google has not commented on the demands. Russian officials argue that the restrictions infringe on the rights of the country’s broadcasters and hope the enormous financial claims will compel Google to restore access to Russian media content on YouTube.
Elon Musk’s social media platform, X, is facing criticism from the Center for Countering Digital Hate (CCDH), which claims its crowd-sourced fact-checking feature, Community Notes, is struggling to curb misinformation about the upcoming US election. According to a CCDH report, of 283 analysed posts containing misleading information, only 26% displayed corrective notes visible to all users, allowing false narratives to reach massive audiences. The 209 uncorrected posts gained over 2.2 billion views, raising concerns over the platform’s commitment to truth and transparency.
Community Notes was launched to empower users to flag inaccurate content. However, critics argue this system alone may be insufficient to handle misinformation during critical events like elections. Calls for X to strengthen its safety measures follow the platform’s recent legal loss to the CCDH, whose research had faulted X for a rise in hate speech. The report also highlights Musk’s endorsement of Republican candidate Donald Trump as a potential complicating factor, since Musk himself has been accused of spreading misinformation.
In response to the ongoing scrutiny, five US state officials urged Musk in August to address misinformation on X’s AI chatbot, which has reportedly circulated false claims related to the November election. X has yet to respond to these calls for stricter safeguards, and its ability to manage misinformation effectively remains under close watch as the election approaches.
Missouri’s Attorney General Andrew Bailey announced an investigation into Google on Thursday, accusing the tech giant of censoring conservative speech. Bailey’s statement, shared on social media platform X, criticised Google, calling it “the biggest search engine in America,” and alleged that it has engaged in bias during what he referred to as “the most consequential election in our nation’s history.” Bailey did not cite specific examples of censorship, sparking quick dismissal from Google, which labelled the claims “totally false” and maintained its commitment to showing “useful information to everyone—no matter what their political beliefs are.”
Republicans have long contended that major social media platforms and search engines demonstrate an anti-conservative bias, though tech firms like Google have repeatedly denied these allegations. Concerns around this issue have intensified during the 2024 election campaign, especially as social media and online search are seen as significant factors influencing public opinion. Bailey’s investigation is part of a larger wave of Republican-led inquiries into potential online censorship, often focused on claims that conservative voices and views are suppressed.
Adding to these concerns, Donald Trump, the Republican presidential candidate, recently pledged that if he wins the upcoming election, he would push for the prosecution of Google, alleging that its search algorithm unfairly targets him by prioritising negative news stories. Trump has not offered evidence for these claims, and Google has previously stated its search results are generated based on relevance and quality to serve users impartially. As the November 5 election draws near, this investigation highlights the growing tension between Republican officials and major tech platforms, raising questions about how online content may shape future political campaigns.
Perplexity has vowed to contest the copyright infringement claims filed by Dow Jones and the New York Post. The California-based AI company denied the accusations in a blog post, calling them misleading. News Corp, owner of both media entities, launched the lawsuit on Monday, accusing Perplexity of extensive illegal copying of its content.
The conflict began after the two publishers allegedly contacted Perplexity in July with concerns over unauthorised use of their work, proposing a licensing agreement. According to Perplexity, the startup replied the same day, but the media companies decided to move forward with legal action instead of continuing discussions.
CEO Aravind Srinivas expressed his surprise over the lawsuit at the WSJ Tech Live event on Wednesday, noting the company had hoped for dialogue instead. He emphasised Perplexity’s commitment to defending itself against what it considers an unwarranted attack.
Perplexity is challenging Google’s dominance in the search engine market by providing summarised information from trusted sources directly through its platform. The case reflects ongoing tensions between publishers and tech firms over the use of copyrighted content for AI development.
The parents of a Massachusetts high school senior are suing Hingham High School and its district after their son received a “D” grade and detention for using AI in a social studies project. Jennifer and Dale Harris, the plaintiffs, argue that their son was unfairly punished, as there was no rule in the school’s handbook prohibiting AI use at the time. They claim the grade has impacted his eligibility for the National Honor Society and his applications to top-tier universities like Stanford and MIT.
The lawsuit, filed in Plymouth County District Court, alleges the school’s actions could cause “irreparable harm” to the student’s academic future. Jennifer Harris stated that their son’s use of AI should not be considered cheating, arguing that AI-generated content belongs to the person who created it. The school, however, classified it as plagiarism. The family’s lawyer, Peter Farrell, contends that substantial authority supports their view that using AI is not plagiarism.
The Harrises are seeking to have their son’s grade changed and his academic record cleared. They emphasised that while they can’t reverse past punishments like detention, the school can still adjust his grade and confirm that he did not cheat. Hingham Public Schools has not commented on the ongoing litigation.
Meta’s Oversight Board has opened a public consultation on immigration-related content that may harm immigrants following two controversial cases on Facebook. The board, which operates independently but is funded by Meta, will assess whether the company’s policies sufficiently protect refugees, migrants, immigrants, and asylum seekers from severe hate speech.
The first case concerns a Facebook post made in May by a Polish far-right coalition, which used a racially offensive term. Despite the post accumulating over 150,000 views, 400 shares, and receiving 15 hate speech reports from users, Meta chose to keep it up following a human review. The second case involves a June post from a German Facebook page that included an image expressing hostility toward immigrants. Meta also upheld its decision to leave this post online after review.
Following the Oversight Board’s intervention, Meta’s experts reviewed both cases again but upheld the initial decisions. Helle Thorning-Schmidt, co-chair of the board, stated that these cases are critical in determining if Meta’s policies are effective and sufficient in addressing harmful content on its platform.
A recent Microsoft report claims that Russia, China, and Iran are increasingly collaborating with cybercriminals to conduct cyber espionage and hacking operations. This partnership blurs the lines between state-directed activities and the illicit financial pursuits typical of criminal networks. National security experts emphasise that this collaboration allows governments to amplify their cyber capabilities without incurring additional costs while offering criminals new profit avenues and the security of government protection.
The report, which analyses cyber threats from July 2023 to June 2024, highlights the significant increase in cyber incidents, with Microsoft reporting over 600 million attacks daily. Russia has focused its efforts primarily on Ukraine, attempting to infiltrate military and governmental systems while spreading disinformation to weaken international support. Meanwhile, as the US election approaches, both Russia and Iran are expected to intensify their cyber operations aimed at American voters.
Despite allegations, countries like China, Russia, and Iran have denied collaborating with cybercriminals. China’s embassy in Washington dismissed these claims as unfounded, asserting that the country actively opposes cyberattacks. Efforts to combat foreign disinformation are increasing, yet the fluid nature of the internet complicates these initiatives, as demonstrated by the rapid resurgence of websites previously seized by US authorities.
Overall, the evolving landscape of cyber threats underscores the growing interdependence between state actors and cybercriminals, posing significant risks to national security and public trust.