Serbia has been accused of using spyware to target journalists and activists, according to a new Amnesty International report. Investigations revealed that ‘NoviSpy,’ a homegrown spyware tool, extracted private data from devices and uploaded it to a government-controlled server. Some cases also involved the use of technology provided by the Israeli firm Cellebrite to unlock phones before infecting them.
Activists reported unusual phone activity following meetings with Serbian authorities. Forensic experts confirmed NoviSpy exported contact lists and private photos to state-controlled servers. The Serbian government has yet to respond to requests for comment regarding these allegations.
Cellebrite, whose phone-cracking devices are widely used by law enforcement worldwide, stated it is investigating the claims. The company’s representative noted that misuse of their technology could violate end-user agreements, potentially leading to a suspension of use by Serbian officials.
Concerns over these practices are heightened due to Serbia’s EU integration programme, partially funded by Norway and administered by the UN Office for Project Services (UNOPS). Norway expressed alarm over the findings and plans to meet with Serbian authorities and UNOPS for clarification.
The imminent adoption of a new UN cybercrime convention by the General Assembly has sparked significant concerns over its implications for global digital rights, particularly in the Arab region. Critics argue that the convention, as currently drafted, lacks sufficient human rights safeguards, potentially empowering authoritarian regimes to suppress dissent both domestically and internationally.
In the Arab region, existing cybercrime laws often serve as tools to curb freedom of expression, with vague terms criminalising online speech that might undermine state prestige or harm public morals. These restrictions contravene Article 19 of the International Covenant on Civil and Political Rights, which requires limitations on expression to be lawful, necessary, and proportionate.
Such ambiguity in legal language fosters an environment of self-censorship, as individuals remain uncertain about how their online content might be interpreted under the law. The convention’s broad scope raises further alarm because it enables international cooperation in cases that could infringe human rights. It allows for the collection of electronic evidence for ‘serious crimes,’ which are vaguely defined and could include acts like defamation or expressions of sexual orientation, offences punishable by severe penalties in some countries.
That provision risks enabling extensive surveillance and data-sharing among nations with weak human rights records. In the Arab region, existing cybercrime laws already permit intrusive surveillance and mass data collection without adequate safeguards, threatening individuals’ privacy rights. Countries like Tunisia and Palestine lack mechanisms to notify individuals after surveillance, removing their ability to seek redress for legal violations and exacerbating privacy concerns.
In light of these issues, Access Now and civil society organisations are urging UN member states to critically evaluate the convention and resist voting for its adoption in its current form. They recommend thorough national discussions to assess its human rights impacts and call for stronger safeguards in future negotiations.
Why does it matter?
Arab states are encouraged to align their cybercrime laws with international standards and engage civil society in discussions to demonstrate a genuine commitment to human rights. The overarching message is clear: without comprehensive reforms, the convention risks further eroding digital rights and undermining freedom of expression worldwide. It is imperative to ensure that any international treaty robustly protects human rights rather than enabling their violation under the guise of combating cybercrime.
AI startup Perplexity has expanded its publisher partnerships, adding media outlets such as the Los Angeles Times and The Independent. These new partners will benefit from a program that shares ad revenue when their content is referenced on the platform. The initiative also provides publishers with access to Perplexity’s API and analytics tools, enabling them to track content performance and trends.
The program, launched in July, has attracted notable partners from Japan, Spain, and Latin America, including Prisa Media and Newspicks. Existing collaborators include TIME, Der Spiegel, and Fortune. Perplexity highlighted the importance of diverse media representation, stating that the partnerships enhance the accuracy and depth of its AI-powered responses.
Backed by Amazon founder Jeff Bezos and Nvidia, Perplexity aims to challenge Google’s dominance in the search engine market. The company has also begun testing advertising on its platform, seeking to monetise its AI search capabilities.
Perplexity’s growth has not been without challenges. It faces lawsuits from News Corp-owned publishers, including Dow Jones and the New York Post, over alleged copyright violations. The New York Times has also issued a cease-and-desist notice, demanding the removal of its content from Perplexity’s generative AI tools.
Spanish newspaper La Vanguardia has announced it will stop posting on X, formerly known as Twitter, citing growing concerns over hate speech, disinformation, and toxic content. The paper, Spain’s fourth most-read publication, criticised the platform’s moderation failures under Elon Musk, claiming it has become an “echo chamber” for conspiracy theories and bots.
The decision follows a similar move by Britain’s The Guardian and highlights growing alarm about X’s role in amplifying harmful narratives, especially amid sensitive events such as Spain’s recent floods. La Vanguardia editor Jordi Juan suspended his personal account, calling the platform’s content increasingly manipulative and profit-driven.
Since Musk’s acquisition of X in 2022, the platform has faced criticism for tolerating misinformation and hate, allegedly to boost ad revenue. The paper noted that X has left key European Union disinformation programs, further eroding trust. While journalists will retain personal accounts, the newspaper itself will suspend activity, preserving its 1.7M-follower archive for historical purposes.
A Moscow court has fined Apple 3.6 million roubles ($36,889) for refusing to remove two podcasts that were reportedly aimed at destabilising Russia’s political landscape, according to the RIA news agency. The court’s decision is part of a larger pattern of the Russian government targeting foreign technology companies for not complying with content removal requests. This action is seen as part of the Kremlin’s broader strategy to exert control over the digital space and reduce the influence of Western tech giants.
Since Russia’s invasion of Ukraine in 2022, the government has intensified its crackdown on foreign tech companies, accusing them of spreading content that undermines Russian authority and sovereignty. The Kremlin has already imposed similar fines on companies like Google and Meta, demanding the removal of content deemed harmful to national security or political stability. Critics argue that these moves are part of an orchestrated effort to suppress dissenting voices and maintain control over information, particularly in the face of growing international scrutiny.
Apple, like other Western companies, has faced mounting pressure to comply with Russia’s increasingly stringent regulations. While the company has largely resisted political content restrictions in other regions, the fine highlights the challenges it faces in operating within Russia’s tightly controlled media environment. Apple has not yet publicly commented on the ruling, but the decision reflects the growing risks for tech firms doing business in Russia as the country tightens its grip on digital platforms.
Aravind Srinivas, CEO of AI search company Perplexity, offered to step in and support New York Times operations amid a looming strike by the newspaper’s tech workers. The NYT Tech Guild announced the planned strike for November 4 after months of seeking better pay and working conditions. Representing workers involved in software support and data analysis on the business side, the guild has requested a 2.5% annual wage increase and a two-day in-office work policy.
As tensions escalated, New York Times publisher AG Sulzberger called the timing of the strike ‘troubling’, noting that the paper’s election coverage is a public service at a crucial time. Responding publicly, Srinivas offered to help ensure uninterrupted access to the Times’s election news, sparking controversy as critics accused him of ‘scabbing’, a term for working in place of striking employees.
Srinivas clarified that his intent was to provide infrastructure support, not replace journalists, as his company has recently launched its own election information platform. However, the New York Times and Perplexity have been at odds recently, with the Times issuing a cease-and-desist letter last month over Perplexity’s alleged scraping of its content for AI use.
Mozambique and Mauritius are facing criticism for recent social media shutdowns amid political crises, with many arguing these actions infringe on digital rights. In Mozambique, platforms like Facebook and WhatsApp were blocked following protests over disputed election results.
Meanwhile, in Mauritius, the government imposed a similar blackout until after the 10 November election, following a wiretapping scandal involving leaked conversations of high-profile figures. Digital rights groups such as Access Now and the #KeepItOn coalition have condemned these actions, arguing that they violate international human rights standards, including the African Commission on Human and Peoples’ Rights (ACHPR) resolution 580 and the International Covenant on Civil and Political Rights (ICCPR), as well as national constitutions.
In response, digital rights advocates are calling on telecommunications providers, including Emtel and Mauritius Telecom, to resist government orders to enforce the shutdowns. By maintaining internet connectivity, these companies could help preserve citizens’ access to information and uphold democratic principles in politically sensitive times.
Additionally, rights organisations argue that internet service providers have a unique role in supporting transparency and accountability, which are vital to democratic societies.
The Kremlin has called on Google to lift its restrictions on Russian broadcasters on YouTube, highlighting mounting legal claims against the tech giant as potential leverage. Google blocked more than a thousand Russian channels and over 5.5 million videos, including state-funded media, after halting its ad services in the country following Russia’s invasion of Ukraine in 2022.
Russia’s legal actions against Google, initiated by 17 Russian TV channels, have led to compound fines based on the company’s revenue in Russia, accumulating to a staggering figure reportedly in the “undecillions,” according to Russian media. Kremlin spokesperson Dmitry Peskov described this enormous number as symbolic but urged Google to take these legal pressures seriously and reconsider its restrictions.
Google has not commented on these demands. Russian officials argue that the restrictions infringe on the rights of the country’s broadcasters and hope that the significant financial claims will compel Google to restore access to Russian media content on YouTube.
Elon Musk’s social media platform, X, is facing criticism from the Center for Countering Digital Hate (CCDH), which claims its crowd-sourced fact-checking feature, Community Notes, is struggling to curb misinformation about the upcoming US election. According to a CCDH report, of 283 analysed posts containing misleading information, only 26% carried corrective notes visible to all users, allowing false narratives to reach massive audiences. The 209 uncorrected posts gained over 2.2 billion views, raising concerns over the platform’s commitment to truth and transparency.
Community Notes was launched to empower users to flag inaccurate content. However, critics argue this system alone may be insufficient to handle misinformation during critical events like elections. Calls for X to strengthen its safety measures follow a recent legal loss to CCDH, which faulted the platform for an increase in hate speech. The report also highlights Musk’s endorsement of Republican candidate Donald Trump as a potential complicating factor, since Musk has also been accused of spreading misinformation himself.
In response to the ongoing scrutiny, five US state officials urged Musk in August to address misinformation on X’s AI chatbot, which has reportedly circulated false claims related to the November election. X has yet to respond to these calls for stricter safeguards, and its ability to manage misinformation effectively remains under close watch as the election approaches.
Missouri’s Attorney General Andrew Bailey announced an investigation into Google on Thursday, accusing the tech giant of censoring conservative speech. Bailey’s statement, shared on social media platform X, criticised Google, calling it “the biggest search engine in America,” and alleged that it has engaged in bias during what he referred to as “the most consequential election in our nation’s history.” Bailey did not cite specific examples of censorship, sparking quick dismissal from Google, which labelled the claims “totally false” and maintained its commitment to showing “useful information to everyone—no matter what their political beliefs are.”
Republicans have long contended that major social media platforms and search engines demonstrate an anti-conservative bias, though tech firms like Google have repeatedly denied these allegations. Concerns around this issue have intensified during the 2024 election campaign, especially as social media and online search are seen as significant factors influencing public opinion. Bailey’s investigation is part of a larger wave of Republican-led inquiries into potential online censorship, often focused on claims that conservative voices and views are suppressed.
Adding to these concerns, Donald Trump, the Republican presidential candidate, recently pledged that if he wins the upcoming election, he would push for the prosecution of Google, alleging that its search algorithm unfairly targets him by prioritising negative news stories. Trump has not offered evidence for these claims, and Google has previously stated its search results are generated based on relevance and quality to serve users impartially. As the November 5 election draws near, this investigation highlights the growing tension between Republican officials and major tech platforms, raising questions about how online content may shape future political campaigns.