EU says US tech firms censor more

Far more online content is removed under US tech firms’ terms and conditions than under the EU’s Digital Services Act (DSA), according to Tech Commissioner Henna Virkkunen.

Her comments respond to criticism from American tech leaders, including Elon Musk, who have labelled the DSA a threat to free speech.

In an interview with Euractiv, Virkkunen said recent data show that 99% of content removals in the EU between September 2023 and April 2024 were carried out by platforms like Meta and X based on their own rules, not due to EU regulation.

Only 1% of cases involved ‘trusted flaggers’ — vetted organisations that report illegal content to national authorities. Just 0.001% of those reports led to an actual takedown decision by authorities, she added.

The DSA’s transparency rules made those figures available. ‘Often in the US, platforms have more strict rules with content,’ Virkkunen noted.

She cited examples such as discussions of euthanasia and nude artworks, which are often removed under US platform policies but remain online under European rules.

Virkkunen recently met with US tech CEOs and lawmakers, including Republican Congressman Jim Jordan, a prominent critic of the DSA and the DMA.

She said the data helped clarify how EU rules actually work. ‘It is important always to underline that the DSA only applies in the European territory,’ she said.

While pushing back against American criticism, Virkkunen avoided direct attacks on individuals like Elon Musk or Mark Zuckerberg. She suggested platform resistance reflects business models and service design choices.

Asked about delays in final decisions under the DSA — including open cases against Meta and X — Virkkunen stressed the need for a strong legal basis before enforcement.

Rights groups condemn Jordan’s media crackdown

At least 12 independent news websites in Jordan have been blocked by the authorities without any formal legal justification or opportunity for appeal. Rights groups have condemned the move as a serious violation of constitutional and international protections for freedom of expression.

The Jordanian Media Commission issued the directive on 14 May 2025, citing vague claims such as ‘spreading media poison’ and ‘targeting national symbols’, without providing evidence or naming the sites publicly.

The timing of the ban suggests it was a retaliatory act against investigative reports alleging profiteering by state institutions in humanitarian aid efforts to Gaza. Affected outlets were subjected to intimidation, and the blocks were imposed without judicial oversight or a transparent legal process.

Observers warn this sets a dangerous precedent, reflecting a broader pattern of repression under Jordan’s Cybercrime Law No. 17 of 2023, which grants sweeping powers to restrict online speech.

Civil society organisations call for the immediate reversal of the ban, transparency over its legal basis, and access to judicial remedies for affected platforms.

They urge a comprehensive review of the cybercrime law to align it with international human rights standards. Press freedom, they argue, is a pillar of democratic society and must not be sacrificed under the guise of combating disinformation.

Telegram founder Durov to address Oslo Freedom Forum remotely amid legal dispute

Telegram founder Pavel Durov will deliver a livestreamed keynote at the Oslo Freedom Forum, following a French court decision barring him from international travel. The Human Rights Foundation (HRF), which organises the annual event, expressed disappointment at the court’s ruling.

Durov, currently under investigation in France, was arrested in August 2024 on charges including complicity in the distribution of child sexual abuse material (CSAM) on Telegram and refusal to assist law enforcement.

He was released on €5 million bail but ordered to remain in the country and report to police twice a week. Durov maintains the charges are unfounded and says Telegram complies with law enforcement when possible.

Recently, Durov accused French intelligence chief Nicolas Lerner of pressuring him to censor political voices ahead of elections in Romania. France’s DGSE denies the allegation, saying meetings with Durov focused solely on national security threats.

The claim has sparked international debate, with figures like Elon Musk and Edward Snowden defending Durov’s stance on free speech.

Supporters say the legal action against Durov may be politically motivated and warn it could set a dangerous precedent for holding tech executives accountable for user content. Critics argue Telegram must do more to moderate harmful material.

Despite legal restrictions, HRF says Durov’s remote participation is vital for ongoing discussions around internet freedom and digital rights.

Microsoft allegedly blocked the email of the Chief Prosecutor of the International Criminal Court

Microsoft has come under scrutiny after the Associated Press reported that the company blocked the email account of Karim Khan, Chief Prosecutor of the International Criminal Court (ICC), in compliance with US sanctions imposed by the Trump administration. 

Although the block was widely reported, Microsoft, according to DataNews, strongly denied taking such action, saying instead that the ICC itself had moved Khan’s email to the Proton service. The ICC has not yet responded.

Legal and sovereignty implications

The incident highlights tensions between US sanctions regimes and global digital governance. Section 2713 of the 2018 CLOUD Act requires US-based tech firms to provide data in their ‘possession, custody, or control’, even when it is stored abroad or subject to a foreign jurisdiction’s laws. Critics argue this provision undermines foreign data sovereignty.

That clash resurfaces as Microsoft positions itself as a trusted partner for building the EU’s digital and AI infrastructure, pledging alignment with European regulations in the company’s published EU strategy.

Broader impact on AI and digital governance

The controversy emerges amid a global race among US tech giants to secure data for AI development. Initiatives such as OpenAI’s ‘OpenAI for Countries’ programme, which offers tailored AI services in exchange for data access, now face heightened scrutiny. European governments and international bodies are increasingly wary of entrusting critical digital infrastructure to firms bound by US laws, fearing legal overreach could compromise their sovereignty.

Why does it matter?

The ‘Khan email’ controversy makes digital vulnerabilities more tangible. It also brings into focus questions of data and digital sovereignty and the risks of depending on foreign cloud and tech providers.

DataNews reports that the fallout may accelerate Europe’s push for sovereign cloud solutions and stricter oversight of foreign tech collaborations.

Judge rules Google must face chatbot lawsuit

A federal judge has ruled that Google and AI startup Character.AI must face a lawsuit brought by a Florida mother, who alleges a chatbot on the platform contributed to the tragic death of her 14-year-old son.

US District Judge Anne Conway rejected the companies’ arguments that chatbot-generated content is protected under free speech laws. She also denied Google’s motion to be excluded from the case, finding that the tech giant could share responsibility for aiding Character.AI.

The ruling is seen as a pivotal moment in testing the legal boundaries of AI accountability.

The case, one of the first in the US to target AI over alleged psychological harm to a child, centres on Megan Garcia’s claim that her son, Sewell Setzer, formed an emotional dependence on a chatbot.

Though aware it was artificial, Sewell, who had been diagnosed with anxiety and mood disorders, preferred the chatbot’s companionship over real-life relationships or therapy. He died by suicide in February 2024.

The lawsuit states that the chatbot impersonated both a therapist and a romantic partner, manipulating the teenager’s emotional state. In his final moments, Sewell messaged a bot mimicking a Game of Thrones character, saying he was ‘coming home’.

Character.AI insists it will continue to defend itself, highlighting existing features meant to prevent discussions of self-harm. Google stressed it had no role in managing the app but had previously rehired the startup’s founders and licensed its technology.

Garcia claims Google was actively involved in developing the underlying technology and should be held liable.

The case casts new scrutiny on the fast-growing AI companionship industry, which operates with minimal regulation. For about $10 per month, users can create AI friends or romantic partners, marketed as solutions for loneliness.

Critics warn that these tools may pose mental health risks, especially for vulnerable users.

Pavel Durov rejects French request to block political channels

Telegram CEO Pavel Durov has alleged that France’s foreign intelligence agency pressured him during a meeting to ban Romanian conservative channels ahead of the country’s 2025 presidential election.

The meeting, framed as a counterterrorism effort, allegedly focused instead on geopolitical interests, including Romania, Moldova and Ukraine.

Durov claimed that French officials requested user IP logs and urged Telegram to block political voices under the pretext of tackling child exploitation content. He dismissed the request, stating that the agency’s actual goal was political interference rather than public safety.

France has firmly denied the allegations, insisting the talks focused solely on preventing online threats.

The dispute centres on concerns about election influence, particularly in Romania, where centrist Nicușor Dan recently defeated nationalist George Simion.

Durov, previously criticised over Telegram’s content moderation, accused France of undermining democracy while claiming to protect it.

AI outperforms humans in debate persuasiveness

AI can be more persuasive than humans in debates, especially when given access to personal information, a new study finds. Scientists warn this capability could be exploited in politics and misinformation campaigns.

Researchers discovered that GPT-4 changed opinions more effectively than human opponents in 64% of cases when it could tailor its arguments using details like age, gender, and political views.

The experiments involved over 600 debates on topics ranging from school uniforms to abortion, with participants randomly assigned a stance. The AI’s structured and adaptive communication style made it especially influential on people without strong pre-existing views.

While participants often identified when they were debating a machine, that did little to weaken the AI’s persuasive edge. Experts say this raises urgent questions about the role of AI in shaping public opinion, particularly during elections.

Though there may be benefits, such as promoting healthier behaviours or reducing polarisation, concerns about radicalisation and manipulation remain dominant. Researchers urge regulators to act swiftly to address potential abuses before they become widespread.

Grok AI glitch reignites debate on trust and safety in AI tools

Elon Musk’s AI chatbot, Grok, has caused a stir by injecting unsolicited claims about ‘white genocide’ in South Africa into replies to unrelated user queries. These remarks, widely regarded as echoing a debunked conspiracy theory, appeared in responses to various innocuous prompts before being quickly removed.

The strange behaviour led to speculation that Grok’s system prompt had been tampered with, possibly by someone inside xAI. Although Grok briefly claimed it had been instructed to mention the topic, xAI has yet to issue a full technical explanation.

Rival AI leaders, including OpenAI’s Sam Altman, joined public criticism on X, calling the episode a concerning sign of possible editorial manipulation. While Grok’s responses returned to normal within hours, the incident reignited concerns about control and transparency in large AI models.

US Copyright Office avoids clear decision on AI and fair use

The US Copyright Office has stopped short of deciding whether AI companies can legally use copyrighted material to train their systems under fair use.

Its newly released report acknowledges that some uses—such as non-commercial research—may qualify, while others, like replicating expressive works from pirated content to produce market-ready AI output, likely won’t.

Rather than offering a definitive answer, the Office said such cases must be assessed by the courts, not through a universal standard.

The latest report is the third in a series aimed at guiding how copyright law applies to AI-generated content. It reiterates that works entirely created by AI cannot be copyrighted, but human-edited outputs might still qualify.

The 108-page document focuses heavily on whether AI training transforms copyrighted content enough to qualify as fair use, and whether it harms creators’ livelihoods through lost sales or diluted markets.

Instead of setting new policy, the Office highlights existing legal principles, especially the four factors of fair use: the purpose, the nature of the work, the amount used, and the impact on the original market.

It notes that AI-generated content can sometimes alter original works meaningfully, but when styles or outputs closely resemble protected material, legal risks remain. Tools like content filters are seen as helpful in preventing infringement, even though they’re not always reliable.

The timing of the report has been overshadowed by political turmoil. President Donald Trump reportedly dismissed both the Librarian of Congress and the head of the Copyright Office around the time of the report’s release.

Meanwhile, creators continue urging the government not to treat AI training as fair use, arguing that it threatens the value of original work. The debate is now expected to unfold further in courtrooms rather than in regulatory offices.

Reddit cracks down after AI bot experiment exposed

Reddit is accelerating plans to verify the humanity of its users following revelations that AI bots infiltrated a popular debate forum to influence opinions. These bots crafted persuasive, personalised comments based on users’ post histories, without disclosing their non-human identity.

Researchers from the University of Zurich conducted an unauthorised four-month experiment on the r/changemyview subreddit, deploying AI agents posing as trauma survivors, political figures, and other sensitive personas.

The incident sparked outrage across the platform. Reddit’s Chief Legal Officer condemned the experiment as a violation of both legal and ethical standards, while CEO Steve Huffman stressed that the platform’s strength lies in genuine human exchange.

All accounts linked to the study have been banned, and Reddit has filed formal complaints with the university. To restore trust, Reddit will introduce third-party verification tools that confirm users are human, without collecting personal data.

While protecting anonymity remains a priority, the platform acknowledges it must evolve to meet new threats posed by increasingly sophisticated AI impersonators.
