Graphite spyware used against European reporters, experts warn

A new surveillance scandal has emerged in Europe as forensic evidence confirms that the Israeli spyware firm Paragon used its Graphite tool to target journalists through zero-click attacks on iOS devices. The attacks, which require no user interaction, exposed sensitive communications and location data.

Citizen Lab, in findings reported by Schneier on Security on 29 April 2025, identified the spyware on multiple journalists’ devices. The findings mark the first confirmed use of Paragon’s spyware against members of the press, raising alarms over digital privacy and press freedom.

Backed by US investors, Paragon has operated out of Israel under claims of aiding national security. But its spyware is now at the centre of a widening controversy, particularly in Italy, where the government recently ended its contract with the company after two journalists were targeted.

Experts warn that such attacks undermine the confidentiality crucial to journalism and could erode democratic safeguards. Even Apple’s secure devices proved vulnerable, according to Bleeping Computer, highlighting the advanced nature of Graphite.

The incident has sparked calls for tighter international regulation of spyware firms. Without oversight, critics argue, tools meant for fighting crime risk being used to silence dissent and target civil society.

The Paragon case underscores the urgent need for transparency, accountability, and stronger protections in an age of powerful, invisible surveillance tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Rights groups condemn Jordan’s media crackdown

At least 12 independent news websites in Jordan have been blocked by the authorities without any formal legal justification or opportunity for appeal. Rights groups have condemned the move as a serious violation of constitutional and international protections for freedom of expression.

The Jordanian Media Commission issued the directive on 14 May 2025, citing vague claims such as ‘spreading media poison’ and ‘targeting national symbols’, without providing evidence or naming the sites publicly.

The timing of the ban suggests it was a retaliatory act against investigative reports alleging profiteering by state institutions in humanitarian aid efforts to Gaza. Affected outlets were subjected to intimidation, and the blocks were imposed without judicial oversight or a transparent legal process.

Observers warn this sets a dangerous precedent, reflecting a broader pattern of repression under Jordan’s Cybercrime Law No. 17 of 2023, which grants sweeping powers to restrict online speech.

Civil society organisations call for the immediate reversal of the ban, transparency over its legal basis, and access to judicial remedies for affected platforms.

They urge a comprehensive review of the cybercrime law to align it with international human rights standards. Press freedom, they argue, is a pillar of democratic society and must not be sacrificed under the guise of combating disinformation.


Telegram founder Durov to address Oslo Freedom Forum remotely amid legal dispute

Telegram founder Pavel Durov will deliver a livestreamed keynote at the Oslo Freedom Forum, following a French court decision barring him from international travel. The Human Rights Foundation (HRF), which organises the annual event, expressed disappointment at the court’s ruling.

Durov, currently under investigation in France, was arrested in August 2024 on charges related to child sexual abuse material (CSAM) distribution and failure to assist law enforcement.

He was released on €5 million bail but ordered to remain in the country and report to police twice a week. Durov maintains the charges are unfounded and says Telegram complies with law enforcement when possible.

Recently, Durov accused French intelligence chief Nicolas Lerner of pressuring him to censor political voices ahead of elections in Romania. France’s DGSE denies the allegation, saying meetings with Durov focused solely on national security threats.

The claim has sparked international debate, with figures like Elon Musk and Edward Snowden defending Durov’s stance on free speech.

Supporters say the legal action against Durov may be politically motivated and warn it could set a dangerous precedent for holding tech executives accountable for user content. Critics argue Telegram must do more to moderate harmful material.

Despite legal restrictions, HRF says Durov’s remote participation is vital for ongoing discussions around internet freedom and digital rights.


Chicago Sun-Times under fire for fake summer guide

The Chicago Sun-Times has come under scrutiny after its 18 May issue featured a summer guide riddled with fake books, quotes, and experts, many of which appear to have been generated by AI.

Among genuine titles like Call Me By Your Name, readers encountered fictional works wrongly attributed to real authors, such as Min Jin Lee and Rebecca Makkai. The guide also cited individuals who do not appear to exist, including a professor at the University of Colorado and a food anthropologist at Cornell.

Although the guide carried the Sun-Times logo, the newspaper claims it wasn’t written or approved by its editorial team. It stated that the section had been licensed from a national content partner, reportedly Hearst, and is now being removed from digital editions.

Victor Lim, the senior director of audience development, said the paper is investigating how the content was published and is working to update policies to ensure third-party material aligns with newsroom standards.

Several stories in the guide lack bylines or carry names linked to questionable content. Marco Buscaglia, credited for one piece, admitted to using AI ‘for background’ but said he failed to verify the sources this time, calling the oversight ‘completely embarrassing.’

The incident echoes similar controversies at other media outlets where AI-generated material has been presented alongside legitimate reporting. Even when such content originates from third-party providers, the blurred line between verified journalism and fabricated stories continues to erode reader trust.


OpenAI partners with major news outlets

OpenAI has signed multiple content-sharing deals with major media outlets, including Politico, Vox, Wired, and Vanity Fair, allowing their content to be featured in ChatGPT.

Under a separate deal with The Washington Post, ChatGPT will display summaries, quotes, and links to the publication’s original reporting in response to relevant queries. OpenAI has secured similar partnerships with more than 20 news publishers, covering over 160 outlets in 20 languages.

The Washington Post’s head of global partnerships, Peter Elkins-Williams, emphasised the importance of meeting audiences where they are, ensuring ChatGPT users have access to impactful reporting.

OpenAI’s media partnerships head, Varun Shetty, noted that more than 500 million people use ChatGPT weekly, highlighting the significance of these collaborations in providing timely, trustworthy information to users.

OpenAI has worked to avoid criticism related to copyright infringement, having previously faced legal challenges, particularly from the New York Times, over claims that chatbots were trained on millions of articles without permission.

While OpenAI sought to dismiss these claims, a US district court allowed the case to proceed, intensifying scrutiny over AI’s use of news content.

Despite these challenges, OpenAI continues to form agreements with leading publications, such as Hearst, Condé Nast, Time magazine, and Vox Media, helping ensure their journalism reaches a wider audience.

Meanwhile, other publications have pursued legal action against AI companies like Cohere for allegedly using their content without consent to train AI models.


AI site faces backlash for copying Southern Oregon news

A major publishing organisation has issued a formal warning to Good Daily News, an AI-powered news aggregator, demanding it cease the unauthorised scraping of content from local news outlets across Southern Oregon and beyond. The News Media Alliance, which represents 2,200 publishers, sent the letter on 25 March, urging the national operator to respect publishers’ rights and stop reproducing material without permission.

Good Daily runs over 350 online ‘local’ news websites across 47 US states, including Daily Medford and Daily Salem in Oregon. Though the platforms appear locally based, they are developed using AI and managed by one individual, Matt Henderson, who has registered mailing addresses in both Ashland, Oregon, and Austin, Texas. Content is reportedly scraped from legitimate local news sites, rewritten by AI, and shared in newsletters, sometimes with source links, but often without permission.

News Media Alliance president Danielle Coffey said such practices undermine the time, resources, and revenue of local journalism. Many publishers use digital tools to block automated scrapers, though this comes at a financial cost. The organisation is working with the Oregon Newspaper Publishers Association and exploring legal options. Others in the industry, including Heidi Wright of the Fund for Oregon Rural Journalism, have voiced strong support for the warning, calling for greater action to defend the integrity of local news.

For more information on these topics, visit diplomacy.edu.

Russia fines Telegram over extremist content

A Moscow court has fined the messaging platform Telegram 7 million roubles (approximately $80,000) for failing to remove content allegedly promoting terrorist acts and inciting anti-government protests, according to the Russian state news agency TASS.

The court ruled that Telegram did not comply with legal obligations to take down materials deemed extremist, including calls to sabotage railway systems in support of Ukrainian forces and to overthrow the Russian government.

The judgement cited specific Telegram channels accused of distributing such content. Authorities argue that these channels played a role in encouraging public unrest and potentially supporting hostile actions against the Russian state.

The decision adds to the long-standing tension between Russia’s media watchdogs and Telegram, which remains one of the most widely used messaging platforms across Russia and neighbouring countries.

Telegram has not issued a statement in response to the fine, and it is unclear whether the company plans to challenge the court’s ruling.

The platform was founded by Russian-born entrepreneur Pavel Durov and is currently headquartered in Dubai, boasting close to a billion users globally. 

Telegram’s decentralised nature and encrypted messaging features have made it popular among users seeking privacy, but it has also drawn criticism from governments citing national security concerns.

Durov himself returned to Dubai in March after months in France following his 2024 arrest linked to accusations that Telegram was used in connection with fraud, money laundering, and the circulation of illegal content.

Although he has denied any wrongdoing, the incident has further strained the company’s relationship with authorities in Russia.

This latest legal action reflects Russia’s ongoing crackdown on digital platforms accused of facilitating dissent or undermining state control.

With geopolitical tensions still high, especially surrounding the conflict in Ukraine, platforms like Telegram face increasing scrutiny and legal pressure in multiple jurisdictions.

X’s Türkiye tangle, between freedom of speech, control, and digital defiance

In the streets of Istanbul and beyond, a storm of unrest swept Türkiye in the past week, sparked by the arrest of Istanbul Mayor Ekrem İmamoğlu, a political figure whose detention has provoked nationwide protests. Amid these events, a digital battlefield has emerged, with X, the social media platform helmed by Elon Musk, thrust into the spotlight. 

Global news reveals that X has suspended many accounts linked to activists and opposition voices sharing protest details. Yet, a twist: X has also publicly rebuffed a Turkish government demand to suspend ‘over 700 accounts,’ vowing to defend free speech. 

This clash between compliance and defiance offers a vivid example of the controversy around freedom of speech and content policy in the digital age, where global platforms, national power, and individual voices collide like tectonic plates on a restless earth.

The spark: protests and a digital crackdown

The unrest began with İmamoğlu’s arrest, a move many saw as a political jab by President Recep Tayyip Erdoğan’s government against a prominent rival. As tear gas clouded the air and chants echoed through Turkish cities, protesters turned to X to organise, share live updates, and amplify their dissent. University students, opposition supporters, and grassroots activists flooded the platform with hashtags and footage: raw, unfiltered glimpses of a nation at odds with itself. But this digital megaphone didn’t go unnoticed. Turkish authorities pinpointed 326 accounts for takedown, accusing them of ‘inciting hatred’ and destabilising order. X’s response? It appears to have partially complied, suspending many of the flagged accounts.

This is not the first time Turkish authorities have required platforms to take action. During the 2013 Gezi Park protests, for instance, Twitter (X’s predecessor) faced similar requests. Erdoğan’s administration has long wielded legal provisions such as Article 299 of the Penal Code (insulting the president) as a lever to fine platforms that don’t align with government content policy. Freedom House’s 2024 report labels the country’s internet freedom as ‘not free,’ citing a history of throttling dissent online. Yet X’s partial obedience here (selectively suspending accounts) hints at a tightrope walk: bowing just enough to keep operating in Türkiye while dodging a complete shutdown that could alienate its user base. For Turks, it’s a bitter pill: a platform they’ve leaned on as a lifeline for free expression now feels like an unreliable ally.

X’s defiant stand: a free speech facade?

Then came the curveball. Posts on X from users like @botella_roberto lit up feeds with news that X had rejected a broader Turkish demand to suspend ‘over 700 accounts,’ calling it ‘illegal’ and doubling down with a statement: ‘X will always defend freedom of speech.’ Such a stance paints X as a guardian of expression, a digital David slinging stones at an authoritarian Goliath.

One theory, whispered across X posts, is that X faced an ultimatum: suspend the critical accounts or risk a nationwide ban, a fate Twitter suffered in 2014.

By complying with a partial measure, X might be playing a calculated game: preserving its Turkish foothold while burnishing its free-speech credibility globally. Musk, after all, has built X’s brand on unfiltered discourse, a stark pivot from Twitter’s pre-2022 moderation-heavy days. Yet, this defiance rings hollow to some. Amnesty International’s Türkiye researcher noted that the suspended accounts (often young activists) were the very voices X claims to champion.

Freedom of speech: a cultural tug-of-war

This saga isn’t just about X or Türkiye; it reflects the global tussle over what ‘freedom of speech’ means in 2025. In some countries, it is enshrined in laws and fiercely debated on platforms like X, where Musk’s ‘maximally helpful’ ethos thrives. In others, it’s a fragile thread woven into cultural fabrics that prize collective stability over individual outcry. In Türkiye, the government frames dissent as a threat to national unity, a stance rooted in decades of political upheaval—think coups in 1960 and 1980. Protesters saw X as a megaphone to challenge that narrative, but when the platform suspended some of their accounts, it was as if the rug had been yanked out from under their feet, reinforcing an infamous sociocultural norm: speak too loud and you’ll be hushed.

Posts on X echo a split sentiment: some laud X for resisting some of the government’s requests, while others decry its compliance as a betrayal. This duality brings us to the conclusion that digital platforms aren’t neutral arbiters in free cyberspace but chameleons, adapting to local laws while trying to project a universal image.

Content policy: the invisible hand

X’s content policy, or lack thereof, adds another layer to this sociocultural dispute. Unlike Meta or YouTube, which lean on thick rulebooks, X under Musk has slashed moderation, betting on user-driven truth over top-down control. Its 2024 transparency report, cited in X posts, shows a global takedown compliance rate of 80%, while its 86% rate in Türkiye suggests higher deference to Ankara’s demands. Why? Reuters points to Türkiye’s 2020 social media law, which mandates that platforms appoint local representatives to comply with takedowns or face bandwidth cuts and fines. X’s Istanbul office, opened in 2023, signals its intent to play on Turkish ground, but the reported refusal of broader government requests draws a line in the sand: comply, but not blindly.

This policy controversy isn’t unique to Türkiye. In Brazil, X faced a 2024 ban over misinformation, only to backtrack after appointing a local representative. In India, X is suing Modi’s government over content removal in an escalating censorship fight. In the US, X fights court battles to protect user speech. In Türkiye, it bows (partly) to avoid exile. Each case underscores a sociocultural truth: content policy isn’t fixed; it’s a continuous negotiation between big tech, national power, and the voice of the people.

Conclusions

As the protests simmer and X navigates Türkiye’s demands, the world watches a sociocultural experiment unfold. Will X double down on defiance, risking a ban that could cost 20 million Turkish users (per 2024 Statista data)? Or will it bend further, cementing its role as a compliant guest in Ankara’s house? The answer could shape future digital dissents and the global blueprint for free speech online. For now, it is a standoff: X holds a megaphone in one hand, a gag in the other, while protesters shout into the fray.

South Korean court reinstates Han Duck-soo as acting president

Prime Minister Han Duck-soo has been reinstated as South Korea’s acting president after the Constitutional Court struck down his impeachment in a seven-to-one ruling.

Han, who briefly held the position before being suspended in December, pledged to stabilise the country and prioritise national interests amid rising tensions over US trade policies.

The court’s decision returns Han to power during a time of heightened political instability, sparked by President Yoon Suk Yeol’s controversial declaration of martial law last year.

Yoon’s actions led to mass protests and a wave of impeachments, resignations, and criminal charges across the political spectrum.

While Yoon awaits a separate ruling and trial over charges of leading an insurrection, Han expressed gratitude to the court and vowed to put an end to ‘extreme confrontation in politics.’

Han is one of South Korea’s most experienced officials, and his return is seen as a move towards continuity in governance. He has served under five presidents from both major parties and is regarded as a figure capable of bridging political divides.

Despite opposition criticism that he failed to prevent Yoon’s martial law move, Han denied any wrongdoing and has committed to guiding South Korea through external economic challenges, especially those posed by the United States.

The court’s pending decision on President Yoon’s fate remains a focal point of national attention. Lee Jae-myung, leader of the opposition Democratic Party and a potential successor, has urged the court to act swiftly to end the uncertainty.

With rallies continuing across the country both in favour of and against Yoon, the outcome could trigger a snap election within 60 days if the president is removed.


Musk’s X wins court motion to remove judge in German election data case

Elon Musk-owned social media platform X has succeeded in removing a judge from a German court case concerning demands for real-time election data.

The case, brought by activist groups Democracy Reporting International and the Society for Civil Rights, aimed to secure immediate access to data from the February 23 German election to monitor misinformation.

Although a Berlin court initially supported the activists’ request, X filed a motion arguing the judge had shown bias by interacting with the plaintiffs’ social media posts. The court approved the motion, though similar claims against two other judges were dismissed.

The ruling means that the activists will not receive the requested data within their critical timeframe. A hearing on the matter is set for February 27, but any ruling will come too late to influence their election monitoring efforts in Germany.

However, the decision could establish an important precedent for future transparency cases involving social media platforms. The activists had argued that while some election data is technically accessible, it is not realistically obtainable without direct access from X.

X has also announced plans to sue the German government over what it calls excessive user data requests, claiming these demands violate privacy and freedom of expression.

The German digital affairs ministry acknowledged X’s public statements but confirmed that no formal lawsuits had been filed yet. The escalating legal dispute highlights growing tensions between Musk and German authorities, particularly as the country prepares for key elections amid concerns over misinformation.
