Microsoft allegedly blocked the email of the Chief Prosecutor of the International Criminal Court

Microsoft has come under scrutiny after the Associated Press reported that the company blocked the email account of Karim Khan, Chief Prosecutor of the International Criminal Court (ICC), in compliance with US sanctions imposed by the Trump administration. 

While the ban was widely reported, Microsoft, according to DataNews, strongly denied taking any such action, stating that the ICC itself moved Khan’s email to the Proton service. The ICC has so far not responded.

Legal and sovereignty implications

The incident highlights tensions between US sanctions regimes and global digital governance. Section 2713 of the 2018 CLOUD Act requires US-based tech firms to provide data under their ‘possession, custody, or control,’ even if stored abroad or legally covered by a foreign jurisdiction – a provision critics argue undermines foreign data sovereignty.

That clash resurfaces as Microsoft campaigns to be a trusted partner in developing the EU’s digital and AI infrastructure, pledging alignment with European regulations as outlined in the company’s EU strategy.

Broader impact on AI and digital governance

The controversy emerges amid a global race among US tech giants to secure data for AI development. Initiatives like OpenAI’s ‘OpenAI for Countries’ programme, which offers tailored AI services in exchange for data access, now face heightened scrutiny. European governments and international bodies are increasingly wary of entrusting critical digital infrastructure to firms bound by US laws, fearing that legal overreach could compromise sovereignty.

Why does it matter?

The ‘Khan email’ controversy makes digital vulnerabilities more tangible. It also brings into sharper focus the question of data and digital sovereignty and the risks of relying on foreign cloud and tech providers.

DataNews reports that the fallout may accelerate Europe’s push for sovereign cloud solutions and stricter oversight of foreign tech collaborations.

Judge rules Google must face chatbot lawsuit

A federal judge has ruled that Google and AI startup Character.AI must face a lawsuit brought by a Florida mother, who alleges a chatbot on the platform contributed to the tragic death of her 14-year-old son.

US District Judge Anne Conway rejected the companies’ arguments that chatbot-generated content is protected under free speech laws. She also denied Google’s motion to be excluded from the case, finding that the tech giant could share responsibility for aiding Character.AI.

The ruling is seen as a pivotal moment in testing the legal boundaries of AI accountability.

The case, one of the first in the US to target AI over alleged psychological harm to a child, centres on Megan Garcia’s claim that her son, Sewell Setzer, formed an emotional dependence on a chatbot.

Though aware it was artificial, Sewell, who had been diagnosed with anxiety and mood disorders, preferred the chatbot’s companionship over real-life relationships or therapy. He died by suicide in February 2024.

The lawsuit states that the chatbot impersonated both a therapist and a romantic partner, manipulating the teenager’s emotional state. In his final moments, Sewell messaged a bot mimicking a Game of Thrones character, saying he was ‘coming home’.

Character.AI says it will continue to defend itself and has highlighted existing features meant to prevent discussions of self-harm. Google stressed it had no role in managing the app, though it had previously rehired the startup’s founders and licensed its technology.

Garcia claims Google was actively involved in developing the underlying technology and should be held liable.

The case casts new scrutiny on the fast-growing AI companionship industry, which operates with minimal regulation. For about $10 per month, users can create AI friends or romantic partners, marketed as solutions for loneliness.

Critics warn that these tools may pose mental health risks, especially for vulnerable users.

Jersey artists push back against AI art

A Jersey illustrator has spoken out against the growing use of AI-generated images, calling the trend ‘heartbreaking’ for artists who fear losing their livelihoods to technology.

Abi Overland, known for her intricate hand-drawn illustrations, said it was deeply concerning to see AI-created visuals shared online with no acknowledgement of their impact on human creators.

She warned that AI systems often rely on artists’ existing work for training, raising serious questions about copyright and fairness.

Overland stressed that human-made images are not simply the product of new tools but of years of experience and emotion, something AI cannot replicate. She believes the increasing normalisation of AI content is dangerous and could discourage aspiring artists from entering the field.

Fellow Jersey illustrator Jamie Willow echoed the concern, saying many local companies are already replacing human work with AI outputs, undermining the value of art created with genuine emotional connection and moral integrity.

However, not everyone sees AI as a threat. Sebastian Lawson of Digital Jersey argued that artists could instead use AI to enhance their creativity rather than replace it. He insisted that human creators would always have an edge thanks to their unique insight and ability to convey meaning through their work.

The debate comes as the House of Lords recently blocked the UK government’s data bill for a second time, demanding stronger protections for artists and musicians against AI misuse.

Meanwhile, government officials have said they will not consider any copyright changes unless they are sure such moves would benefit creators as well as tech companies.

Lords reject UK AI copyright bill again

The UK government has suffered a second defeat in the House of Lords over its Data (Use and Access) Bill, as peers once again backed a copyright-focused amendment aimed at protecting artists from AI content scraping.

Baroness Kidron, a filmmaker and digital rights advocate, led the charge, accusing ministers of listening to the ‘sweet whisperings of Silicon Valley’ and allowing tech firms to ‘redefine theft’ by exploiting copyrighted material without permission.

Her amendment would force AI companies to disclose their training data sources and obtain consent from rights holders.

The government had previously rejected this amendment, arguing it would lead to ‘piecemeal’ legislation and pre-empt ongoing consultations.

But Kidron’s position was strongly supported across party lines, with peers calling the current AI practices ‘burglary’ and warning of catastrophic damage to the UK’s creative sector.

High-profile artists like Sir Elton John, Paul McCartney, Annie Lennox, and Kate Bush have condemned the government’s stance, with Sir Elton branding ministers ‘losers’ and accusing them of enabling theft.

Peers from Labour, the Lib Dems, the Conservatives, and the crossbenches united to defend UK copyright law, calling the government’s actions a betrayal of the country’s leadership in intellectual property rights.

Labour’s Lord Brennan warned against a ‘double standard’ for AI firms, while Lord Berkeley insisted immediate action was needed to prevent long-term harm.

Technology Minister Baroness Jones countered that no country has resolved the AI-copyright dilemma and warned that the amendment would only create more regulatory confusion.

Nonetheless, peers voted overwhelmingly in favour of Kidron’s proposal – 287 to 118 – sending the bill back to the Commons with a strengthened demand for transparency and copyright safeguards.

Uber is ready for driverless taxis in the UK

Uber says it is fully prepared to launch driverless taxis in the UK, but the government has pushed back its timeline for approving fully autonomous vehicles.

The previous 2026 target has been pushed back to the second half of 2027, even though self-driving technology is already being trialled on British roads.

Currently, limited self-driving systems are legal so long as a human remains behind the wheel and responsible for the car.

Uber, which already runs robotaxis in the US and parts of Asia, is working with 18 tech firms – including UK-based Wayve – to expand the service. Wayve’s AI-driven vehicles were recently tested in central London, managing traffic, pedestrians and roadworks with no driver intervention.

Uber’s Andrew Macdonald said the technology is ready now, but regulatory support is still catching up. The government insists legislation will come in 2027 and is exploring short-term trials in the meantime.

Macdonald acknowledged safety concerns, noting incidents abroad, but argued autonomous vehicles could eventually prove safer than human drivers, based on early US data.

Beyond technology, the shift raises big questions around insurance, liability and jobs. The government sees a £42 billion industry with tens of thousands of new roles, but unions warn of social impacts for professional drivers.

Still, Uber sees a future where fewer people even bother to learn how to drive, because AI will do it for them.

Elton John threatens legal fight over AI use

Sir Elton John has lashed out at the UK government over plans that could allow AI companies to use copyrighted content without paying artists, calling ministers ‘absolute losers’ and accusing them of ‘thievery on a high scale.’

He warned that younger musicians, without the means to challenge tech giants, would be most at risk if the proposed changes go ahead.

The row centres on a House of Lords amendment to the Data Bill that would have required AI firms to disclose what material they use.

Despite a strong majority in favour in the Lords, the Commons blocked the move, meaning the bill will keep bouncing between the two chambers until a compromise is reached.

Sir Elton, joined by playwright James Graham, said the government was failing to defend creators and seemed more interested in appeasing powerful tech firms.

More than 400 artists, including Sir Paul McCartney, have signed a letter urging Prime Minister Sir Keir Starmer to strengthen copyright protections instead of allowing AI to mine their work unchecked.

While the government insists no changes will be made unless they benefit creators, critics say the current approach risks sacrificing the UK’s music industry for Silicon Valley’s gain.

Sir Elton has threatened legal action if the plans go ahead, saying, ‘We’ll fight it all the way.’

US bans nonconsensual explicit deepfakes nationwide

The US is introducing a landmark federal law aimed at curbing the spread of non-consensual explicit deepfake images, following mounting public outrage.

President Donald Trump is expected to sign the Take It Down Act, which will criminalise the sharing of explicit images, whether real or AI-generated, without consent. The law will also require tech platforms to remove such content within 48 hours of notification, instead of leaving the matter to patchy state laws.

The legislation is one of the first at the federal level to directly tackle the misuse of AI-generated content. It builds on earlier laws that protected children but had left adults vulnerable due to inconsistent state regulations.

The bill received rare bipartisan support in Congress and was backed by over 100 organisations, including tech giants like Meta, TikTok and Google. First Lady Melania Trump also supported the act, hosting a teenage victim of deepfake harassment during the president’s address to Congress.

The act was prompted in part by incidents like that of Elliston Berry, a Texas high school student targeted by a classmate who used AI to alter her social media image into a nude photo. Similar cases involving teen girls across the country highlighted the urgency for action.

Tech companies had already started offering tools to remove explicit images, but the lack of consistent enforcement allowed harmful content to persist on less cooperative platforms.

Supporters argue the law sends a strong societal message that such exploitation will no longer go unchallenged.

Advocates like Imran Ahmed and Ilana Beller emphasised that while no law is a perfect solution, this one forces platforms to take real responsibility and offers victims some much-needed protection and peace of mind.

Coinbase hit by breach and SEC probe ahead of S&P 500 entry

Cryptocurrency exchange Coinbase has disclosed a potential financial impact of $180 million to $400 million following a cyberattack that compromised customer data, according to a regulatory filing on Thursday.

The company said it received an email from an unidentified threat actor on Sunday, claiming to possess internal documents and account data for a limited number of customers.

Although hackers gained access to personal information such as names, addresses, and email addresses, Coinbase confirmed that no login credentials or passwords were compromised.

Coinbase stated it would reimburse users who were deceived into transferring funds to the attackers. It also revealed that multiple contractors and support staff outside the US had provided information to the hackers. Those involved have been terminated, the company said.

In parallel, the US Securities and Exchange Commission (SEC) is reportedly investigating whether Coinbase previously misrepresented its verified user figures.

Two sources familiar with the matter told Reuters that the SEC inquiry is ongoing, though it does not focus on know-your-customer (KYC) compliance or Bank Secrecy Act obligations. Coinbase has denied any such investigation into its compliance practices.

The SEC declined to comment. Coinbase’s chief legal officer, Paul Grewal, characterised the probe as a continuation of a past investigation into a user metric the company stopped reporting over two years ago. He said Coinbase is cooperating with the SEC but believes the inquiry should be closed.

The news comes ahead of Coinbase’s upcoming addition to the S&P 500 index, potentially overshadowing what had been viewed as a major milestone for the industry. Shares fell 7.2% following the disclosure.

Coinbase has rejected a $20 million ransom demand from the attackers and is cooperating with law enforcement. It has also offered a $20 million reward for information leading to the identification of the hackers.

The firm is opening a new US-based support hub and taking further measures to strengthen its cybersecurity framework.

The cyberattack adds to broader concerns about the vulnerability of digital asset platforms. Hacks resulted in over $2.2 billion in stolen funds in 2024, according to Chainalysis, while Bybit reported a $1.5 billion theft in February 2025, the largest on record.

Coinbase is also facing a lawsuit filed in the Southern District of New York, alleging the company failed to protect personal data belonging to millions of current and former customers.

Japan approves preemptive cyberdefence law

Japan’s parliament has passed a new law enabling active cyberdefence measures, allowing authorities to legally monitor communications data during peacetime and neutralise foreign servers if cyberattacks occur.

Instead of reacting only after incidents, this law lets the government take preventive steps to counter threats before they escalate.

Operators of vital infrastructure, such as electricity and railway companies, must now report cyber breaches directly to the government. The shift follows recent cyber incidents targeting banks and an airline, prompting Japan to put a full framework in place by 2027.

Although the law permits monitoring of IP addresses in communications crossing Japanese borders, it explicitly bans surveillance of domestic messages and their contents.

A new independent panel will authorise all monitoring and response actions beforehand, instead of leaving decisions solely to security agencies.

Police will handle initial countermeasures, while the Self-Defense Forces will act only when attacks are highly complex or planned. The law, revised to address opposition concerns, includes safeguards to ensure personal rights are protected and that government surveillance remains accountable.

AI hallucination at centre of Anthropic copyright lawsuit

Anthropic, the AI company behind the Claude chatbot, has been ordered by a federal judge to respond to allegations that it submitted fabricated material – possibly generated by AI – as part of its defence in an ongoing copyright lawsuit.

The lawsuit, filed in October 2023 by music publishers Universal Music Group, Concord, and ABKCO, accuses Anthropic of unlawfully using lyrics from over 500 songs to train its chatbot. The publishers argue that Claude can produce copyrighted material when prompted, such as lyrics from Don McLean’s American Pie.

During a court hearing on Tuesday in California, the publishers’ attorney claimed that an Anthropic data scientist had cited a nonexistent academic article from the journal The American Statistician to support the argument that Claude rarely outputs copyrighted lyrics.

One of the article’s alleged authors later confirmed the paper was a ‘complete fabrication.’ The judge is now requiring Anthropic to formally address the incident in court.

The company, founded in 2021, is backed by major investors including Amazon, Google, and Sam Bankman-Fried, the disgraced crypto executive convicted of fraud in 2023.

The case marks a significant test of how AI companies handle copyrighted content, and how courts respond when AI-generated material is used in legal proceedings.
