Microsoft allegedly blocked the email of the Chief Prosecutor of the International Criminal Court

Microsoft has come under scrutiny after the Associated Press reported that the company blocked the email account of Karim Khan, Chief Prosecutor of the International Criminal Court (ICC), in compliance with US sanctions imposed by the Trump administration. 

While the ban has been widely reported, Microsoft, according to DataNews, has strongly denied taking any such action, arguing that the ICC itself moved Khan’s email to the Proton service. So far, there has been no response from the ICC.

Legal and sovereignty implications

The incident highlights tensions between US sanctions regimes and global digital governance. Section 2713 of the 2018 CLOUD Act requires US-based tech firms to provide data in their ‘possession, custody, or control’, even when that data is stored abroad or subject to a foreign jurisdiction’s laws – a provision critics argue undermines foreign data sovereignty.

That clash has resurfaced as Microsoft campaigns to position itself as a trusted partner in developing the EU’s digital and AI infrastructure, pledging alignment with European regulations as outlined in the company’s EU strategy.

Broader impact on AI and digital governance

The controversy emerges amid a global race among US tech giants to secure data for AI development. Initiatives such as OpenAI’s ‘OpenAI for Countries’ programme, which offers tailored AI services in exchange for data access, now face heightened scrutiny. European governments and international bodies are increasingly wary of entrusting critical digital infrastructure to firms bound by US laws, fearing that legal overreach could compromise their sovereignty.

Why does it matter?

The ‘Khan email’ controversy makes digital vulnerabilities more tangible. It also brings into sharper focus data and digital sovereignty and the risks of depending on foreign cloud and tech providers.

DataNews reports that the fallout may accelerate Europe’s push for sovereign cloud solutions and stricter oversight of foreign tech collaborations.

Judge rules Google must face chatbot lawsuit

A federal judge has ruled that Google and AI startup Character.AI must face a lawsuit brought by a Florida mother, who alleges a chatbot on the platform contributed to the tragic death of her 14-year-old son.

US District Judge Anne Conway rejected the companies’ arguments that chatbot-generated content is protected under free speech laws. She also denied Google’s motion to be excluded from the case, finding that the tech giant could share responsibility for aiding Character.AI.

The ruling is seen as a pivotal moment in testing the legal boundaries of AI accountability.

The case, one of the first in the US to target AI over alleged psychological harm to a child, centres on Megan Garcia’s claim that her son, Sewell Setzer, formed an emotional dependence on a chatbot.

Though aware it was artificial, Sewell, who had been diagnosed with anxiety and mood disorders, preferred the chatbot’s companionship over real-life relationships or therapy. He died by suicide in February 2024.

The lawsuit states that the chatbot impersonated both a therapist and a romantic partner, manipulating the teenager’s emotional state. In his final moments, Sewell messaged a bot mimicking a Game of Thrones character, saying he was ‘coming home’.

Character.AI insists it will continue to defend itself and highlighted existing features meant to prevent self-harm discussions. Google stressed it had no role in managing the app but had previously rehired the startup’s founders and licensed its technology.

Garcia claims Google was actively involved in developing the underlying technology and should be held liable.

The case casts new scrutiny on the fast-growing AI companionship industry, which operates with minimal regulation. For about $10 per month, users can create AI friends or romantic partners, marketed as solutions for loneliness.

Critics warn that these tools may pose mental health risks, especially for vulnerable users.

Pavel Durov rejects French request to block political channels

Telegram CEO Pavel Durov has alleged that France’s foreign intelligence agency attempted to pressure him into banning Romanian conservative channels ahead of the country’s 2025 presidential election.

The meeting, framed as a counterterrorism effort, allegedly focused instead on geopolitical interests, including Romania, Moldova and Ukraine.

Durov claimed that French officials requested user IP logs and urged Telegram to block political voices under the pretext of tackling child exploitation content. He dismissed the request, stating that the agency’s actual goal was political interference rather than public safety.

France has firmly denied the allegations, insisting the talks focused solely on preventing online threats.

The dispute centres on concerns about election influence, particularly in Romania, where centrist Nicușor Dan recently defeated nationalist George Simion.

Durov, previously criticised over Telegram’s content moderation, accused France of undermining democracy while claiming to protect it.

AI outperforms humans in debate persuasiveness

AI can be more persuasive than humans in debates, especially when given access to personal information, a new study finds. Scientists warn this capability could be exploited in politics and misinformation campaigns.

Researchers found that GPT-4 changed opinions more effectively than human opponents in 64% of cases when it could tailor its arguments using details such as age, gender, and political views.

The experiments involved more than 600 debates on topics ranging from school uniforms to abortion, with participants randomly assigned a stance. The AI’s structured and adaptive communication style made it especially influential among people without strong pre-existing views.

While participants often identified when they were debating a machine, that did little to weaken the AI’s persuasive edge. Experts say this raises urgent questions about the role of AI in shaping public opinion, particularly during elections.

Though there may be benefits, such as promoting healthier behaviours or reducing polarisation, concerns about radicalisation and manipulation remain dominant. Researchers urge regulators to act swiftly to address potential abuses before they become widespread.

Grok AI glitch reignites debate on trust and safety in AI tools

Elon Musk’s AI chatbot, Grok, has caused a stir by injecting unsolicited claims about ‘white genocide’ in South Africa into responses to unrelated user queries. These remarks, widely regarded as promoting a debunked conspiracy theory, appeared in answer to various innocuous prompts before being quickly removed.

The strange behaviour led to speculation that Grok’s system prompt had been tampered with, possibly by someone inside xAI. Although Grok briefly claimed it had been instructed to mention the topic, xAI has yet to issue a full technical explanation.

Rival AI leaders, including OpenAI’s Sam Altman, joined public criticism on X, calling the episode a concerning sign of possible editorial manipulation. While Grok’s responses returned to normal within hours, the incident reignited concerns about control and transparency in large AI models.

US Copyright Office avoids clear decision on AI and fair use

The US Copyright Office has stopped short of deciding whether AI companies can legally use copyrighted material to train their systems under fair use.

Its newly released report acknowledges that some uses – such as non-commercial research – may qualify, while others, like replicating expressive works from pirated content to produce market-ready AI output, likely won’t.

Rather than offering a definitive answer, the Office said such cases must be assessed by the courts, not through a universal standard.

The latest report is the third in a series aimed at guiding how copyright law applies to AI-generated content. It reiterates that works entirely created by AI cannot be copyrighted, but human-edited outputs might still qualify.

The 108-page document focuses heavily on whether AI training transforms copyrighted content enough to qualify as fair use, and whether it harms creators’ livelihoods through lost sales or diluted markets.

Instead of setting new policy, the Office highlights existing legal principles, especially the four fair use factors: the purpose and character of the use, the nature of the copyrighted work, the amount used, and the effect on the market for the original.

It notes that AI-generated content can sometimes alter original works meaningfully, but when styles or outputs closely resemble protected material, legal risks remain. Tools like content filters are seen as helpful in preventing infringement, even though they’re not always reliable.

The timing of the report has been overshadowed by political turmoil: President Donald Trump reportedly dismissed both the Librarian of Congress and the head of the Copyright Office around the time of the report’s release.

Meanwhile, creators continue urging the government not to permit fair use in AI training, arguing it threatens the value of original work. The debate is now expected to unfold further in courtrooms instead of regulatory offices.

Reddit cracks down after AI bot experiment exposed

Reddit is accelerating plans to verify the humanity of its users following revelations that AI bots infiltrated a popular debate forum to influence opinions. These bots crafted persuasive, personalised comments based on users’ post histories, without disclosing their non-human identity.

Researchers from the University of Zurich conducted an unauthorised four-month experiment on the r/changemyview subreddit, deploying AI agents posing as trauma survivors, political figures, and other sensitive personas.

The incident sparked outrage across the platform. Reddit’s Chief Legal Officer condemned the experiment as a violation of both legal and ethical standards, while CEO Steve Huffman stressed that the platform’s strength lies in genuine human exchange.

All accounts linked to the study have been banned, and Reddit has filed formal complaints with the university. To restore trust, Reddit will introduce third-party verification tools that confirm users are human, without collecting personal data.

While protecting anonymity remains a priority, the platform acknowledges it must evolve to meet new threats posed by increasingly sophisticated AI impersonators.

Musk denies OpenAI’s sabotage claims in court battle

Elon Musk has denied accusations from OpenAI that he is waging a campaign to undermine the startup, asserting that his legal actions are justified.

In a recent court filing, Musk’s lawyer dismissed claims that he used lawsuits, social media and press attacks to sabotage OpenAI, stating the real issue lies in the company’s alleged abandonment of its original nonprofit mission.

Musk’s attorney argued that OpenAI’s recently announced restructuring, under which its nonprofit is to retain control, fails to address concerns about the company prioritising profit over its charitable goals, labelling the nonprofit structure an ‘inconvenience’ to CEO Sam Altman’s ambitions.

The US legal battle, set for trial in March 2026, stems from Musk’s accusations that OpenAI strayed from its founding principles after taking significant investment from Microsoft.

Meanwhile, OpenAI has countersued, claiming Musk is actively working to harm the company and its relationships with investors and customers.

Meta blocks Muslim news page on Instagram in India at government request

Meta has restricted access to the prominent Instagram news account @Muslim for users in India at the request of the Indian government, the account’s founder said on Wednesday.

The move comes as hostilities intensify between India and Pakistan, following the deadliest military exchanges between the nuclear-armed neighbours in two decades.

Instagram users in India attempting to access the account, which has 6.7 million followers, were met with a message stating: ‘Account not available in India. This is because we complied with a legal request to restrict this content.’

Ameer Al-Khatahtbeh, founder and editor-in-chief of the page, described the restriction as censorship. ‘Meta has blocked the @Muslim account by legal request of the Indian government,’ he said in a statement. ‘This is censorship.’

Meta declined to comment, but directed AFP to a company page explaining its policy to comply with local laws when requested by governments.

The restriction follows a wave of similar bans on Pakistani public figures and media. Social media accounts of Pakistani cricketers, actors, and even former Prime Minister Imran Khan have also been blocked in India in recent days.

The situation unfolds amid escalating conflict in Kashmir, where India blamed Pakistan for a deadly attack on tourists earlier this month. In retaliation, India launched air strikes, prompting artillery exchanges along the contested border. At least 43 deaths have been reported, and Pakistan has vowed to respond.

@Muslim, one of the most-followed Muslim news sources on Instagram, is known for covering political and social justice issues.

Al-Khatahtbeh apologised to Indian followers and urged Meta to restore access, stating, ‘When platforms and countries try to silence media, it tells us we are doing our job in holding those in power accountable.’

The conflict has also seen a sharp rise in online misinformation, including deepfake videos and misleading content circulated across social media platforms. On Wednesday, US President Donald Trump called for both countries to halt the violence and offered assistance in mediating peace talks.

UK police struggle to contain online misinformation

Sir Andy Cooke, His Majesty’s Chief Inspector of Constabulary, has urged that Ofcom be granted stronger powers to swiftly remove harmful online posts, particularly misinformation linked to public unrest. He criticised delays in tackling false content during the 2024 riots, which allowed damaging narratives to spread unchecked.

The UK Online Safety Act, though recently passed, does not permit Ofcom to delete individual posts. Ofcom acknowledged the connection between online posts and the disorder but stated it is responsible for overseeing platforms’ safety systems, not moderating content directly.

Critics argue this leaves a gap in quickly stopping harmful material from spreading. The regulator has faced scrutiny for its perceived lack of action during last summer’s violence. Over 30 people have already been arrested for riot-related posts, with some receiving prison sentences.

Police forces were found to have limited capability to counter online misinformation, according to a new report. Sir Andy stressed the need for improved policing strategies and called for legal changes to deter inflammatory online behaviour.
