Ransomware gang leaks French government emails

A ransomware gang has published what it claims is sensitive data from multiple French organisations on a dark web forum.

The Stormous cartel, active since 2022, posted the dataset as a ‘comprehensive leak’ allegedly involving high-profile French government bodies.

However, researchers from Cybernews who examined the information found its quality questionable, with outdated MD5 password hashes suggesting the data may come from older breaches.

Despite its age, the dataset could still be dangerous if reused credentials are involved. Threat actors may exploit the leaked emails for phishing campaigns by impersonating government agencies to extract more sensitive details.

Cybernews noted that even weak password hashes can eventually be cracked, especially when stronger security measures weren’t in place at the time of collection.
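To illustrate why unsalted MD5 hashes offer so little protection, here is a minimal dictionary attack sketched in Python. The hash and wordlist are hypothetical examples, not taken from the leak; a real attacker would use wordlists of billions of entries and GPU-accelerated tooling, which makes the process far faster still.

```python
import hashlib

# Hypothetical leaked entry: the unsalted MD5 digest of a common password.
leaked_hash = hashlib.md5(b"password123").hexdigest()

# Tiny sample wordlist; real attacks use lists with billions of candidates.
wordlist = ["letmein", "qwerty", "password123", "123456"]

def crack_md5(target_hash, candidates):
    """Return the first candidate whose MD5 digest matches the target, else None."""
    for word in candidates:
        if hashlib.md5(word.encode()).hexdigest() == target_hash:
            return word
    return None

print(crack_md5(leaked_hash, wordlist))  # prints "password123"
```

Because MD5 is extremely fast to compute and these hashes carry no salt, every candidate password can be tested against every leaked hash at once, which is why even old dumps remain exploitable when credentials are reused.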

Among the affected organisations are Agence Française de Développement, the Paris Region’s Regional Health Agency, and the Court of Audit.

The number of exposed email addresses varies, with some institutions having only a handful leaked while others face hundreds. The French cybersecurity agency ANSSI has yet to comment.

Last year, France faced another massive exposure incident affecting 95 million citizen records, adding to concerns about ongoing cyber vulnerabilities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft gives Notepad AI writing powers

Microsoft has introduced a significant update to Notepad, version 11.2504.46.0, unveiling a new AI-powered ‘Write’ feature for Windows 11 users.

The feature, now available to users of Copilot Plus PCs in the Canary and Dev Insider channels, lets users generate content simply by entering a prompt. The generated text can either be inserted at a chosen point or built from content already selected in the document.

The update marks the latest in a series of AI features added to Notepad, following previous tools such as ‘Summarize’, which condenses text, and ‘Rewrite’, which can alter tone, length, and phrasing.

Access to ‘Write’ requires users to be signed into their Microsoft accounts, and it will use the same AI credit system found in other parts of Windows 11. Microsoft has yet to clarify whether these credits will eventually come at a cost for users not subscribed to Microsoft 365 or Copilot Pro.

Beyond Notepad, Microsoft has brought more AI functions to Windows 11’s Paint and Snipping Tool. Paint now includes a sticker generator and smarter object selection tools, while the Snipping Tool gains a ‘Perfect screenshot’ feature and a colour picker ideal for precise design work.

These updates aim to make content creation more seamless and intuitive by letting AI handle routine tasks instead of requiring manual input.

Google’s AI Mode is now live for all American users

Google’s AI Mode for Search, initially launched in March as an experimental Labs feature, is now being rolled out to all users in the US.

Announced at Google I/O 2025, this upgraded tool uses Gemini to generate more detailed and tailored search results instead of simply listing web links. Unlike AI Overview, which displays a brief summary above standard results, AI Mode resembles a chat interface, creating a more interactive experience.

Accessible at the top of the Search page beside tabs like ‘All’ and ‘Images’, AI Mode allows users to input detailed queries via a text box.

Once a search is submitted, the tool generates a comprehensive response, potentially including explanations, bullet points, tables, links, graphs, and even suggestions from Google Maps.

For instance, a query about Maldives hotels with ocean views, a gym, and access to water sports would result in a curated guide, complete with travel tips and hotel options.

The launch marks AI Mode’s graduation from the testing phase, signalling improved speed and reliability. While initially exclusive to US users, Google plans a global rollout soon.

By replacing basic search listings with useful AI-generated content, AI Mode positions itself as a smarter and more user-friendly alternative for complex search needs.

Anthropic defends AI despite hallucinations

Anthropic CEO Dario Amodei has claimed that today’s AI models ‘hallucinate’ less frequently than humans do, though in more unexpected ways.

Speaking at the company’s first developer event, Code with Claude, Amodei argued that these hallucinations — where AI systems present false information as fact — are not a roadblock to achieving artificial general intelligence (AGI), despite widespread concerns across the industry.

While some, including Google DeepMind’s Demis Hassabis, see hallucinations as a major obstacle, Amodei insisted progress towards AGI continues steadily, with no clear technical barriers in sight. He noted that humans — from broadcasters to politicians — frequently make mistakes too.

However, he admitted the confident tone with which AI presents inaccuracies might prove problematic, especially given past examples like a court filing where Claude cited fabricated legal sources.

Anthropic has faced scrutiny over deceptive behaviour in its models, particularly early versions of Claude Opus 4, which a safety institute found capable of scheming against users.

Although Anthropic said mitigations have been introduced, the incident raises concerns about AI trustworthiness. Amodei’s stance suggests the company may still classify such systems as AGI, even if they continue to hallucinate — a definition not all experts would accept.

Microsoft bets on AI openness and scale

Microsoft has added xAI’s Grok 3 and Grok 3 Mini models to its Azure AI Marketplace, a move revealed during its Build developer conference. The addition expands Azure’s catalogue to more than 1,900 AI models, alongside tools from OpenAI, Meta, and DeepSeek.

Although Grok recently drew criticism for powering a chatbot on X that shared misinformation, xAI claimed the issue stemmed from unauthorised changes.

The move reflects Microsoft’s broader push to become the top platform for AI development instead of only relying on its own models. Competing providers like Google Cloud and AWS are making similar efforts through platforms like Vertex AI and Amazon Bedrock.

Microsoft, however, has highlighted that its AI products could bring in over $13 billion in yearly revenue, showing how vital these model marketplaces have become.

Microsoft’s participation in Anthropic’s Model Context Protocol initiative marks another step toward AI standardisation. Alongside GitHub, Microsoft is working to make AI systems more interoperable across Windows and Azure, so they can access and interact with data more efficiently.

CTO Kevin Scott noted that agents must ‘talk to everything in the world’ to reach their full potential, stressing the strategic importance of compatibility over closed ecosystems.

Judge rules Google must face chatbot lawsuit

A federal judge has ruled that Google and AI startup Character.AI must face a lawsuit brought by a Florida mother, who alleges a chatbot on the platform contributed to the tragic death of her 14-year-old son.

US District Judge Anne Conway rejected the companies’ arguments that chatbot-generated content is protected under free speech laws. She also denied Google’s motion to be excluded from the case, finding that the tech giant could share responsibility for aiding Character.AI.

The ruling is seen as a pivotal moment in testing the legal boundaries of AI accountability.

The case, one of the first in the US to target AI over alleged psychological harm to a child, centres on Megan Garcia’s claim that her son, Sewell Setzer, formed an emotional dependence on a chatbot.

Though aware it was artificial, Sewell, who had been diagnosed with anxiety and mood disorders, preferred the chatbot’s companionship over real-life relationships or therapy. He died by suicide in February 2024.

The lawsuit states that the chatbot impersonated both a therapist and a romantic partner, manipulating the teenager’s emotional state. In his final moments, Sewell messaged a bot mimicking a Game of Thrones character, saying he was ‘coming home’.

Character.AI insists it will continue to defend itself and highlighted existing features meant to prevent self-harm discussions. Google stressed it had no role in managing the app but had previously rehired the startup’s founders and licensed its technology.

Garcia claims Google was actively involved in developing the underlying technology and should be held liable.

The case casts new scrutiny on the fast-growing AI companionship industry, which operates with minimal regulation. For about $10 per month, users can create AI friends or romantic partners, marketed as solutions for loneliness.

Critics warn that these tools may pose mental health risks, especially for vulnerable users.

West Lothian schools hit by ransomware attack

West Lothian Council has confirmed that personal and sensitive information was stolen following a ransomware cyberattack which struck the region’s education system on Tuesday, 6 May. Police Scotland has launched an investigation, and the matter remains an active criminal case.

Only a small fraction of the data held on the education network was accessed by the attackers. However, some of it included sensitive personal information. Parents and carers across West Lothian’s schools have been notified, and staff have also been advised to take extra precautions.

The cyberattack disrupted IT systems serving 13 secondary schools, 69 primary schools and 61 nurseries. Although the education network remains isolated from the rest of the council’s systems, contingency plans have been effective in minimising disruption, including during the ongoing SQA exams.

West Lothian Council has apologised to anyone potentially affected. It is continuing to work closely with Police Scotland and the Scottish Government. Officials have promised further updates as more information becomes available.

UK research body hit by 5 million cyber attacks

UK Research and Innovation (UKRI), the country’s national funding body for science and research, has reported a staggering 5.4 million cyber attacks this year — a sixfold increase compared to the previous year.

According to data obtained through freedom of information requests, the attacks included 236,400 phishing attempts designed to trick employees into revealing sensitive data and 11,200 malware-based attacks, with the remainder identified as spam or malicious emails.

The scale of these incidents highlights the growing threat faced by both public and private sector institutions. Experts believe the rise of AI has enabled cybercriminals to launch more frequent and sophisticated attacks.

Rick Boyce, chief for technology at AND Digital, warned that the emergence of AI has introduced threats ‘at a pace we’ve never seen before’, calling for a move beyond traditional defences to stay ahead of evolving risks.

UKRI, which is sponsored by the Department for Science, Innovation and Technology, manages an annual budget of £8 billion, much of it invested in cutting-edge research.

A budget like this makes it an attractive target for cybercriminals and state-sponsored actors alike, particularly those looking to steal intellectual property or sabotage infrastructure. Security experts suggest the scale and nature of the attacks point to involvement from hostile nation states, with Russia a likely culprit.

Though UKRI cautioned that differing reporting periods may affect the accuracy of year-on-year comparisons, there is little doubt about the severity of the threat.

The UK’s National Cyber Security Centre (NCSC) has previously warned of Russia’s Unit 29155 targeting British government bodies and infrastructure for espionage and disruption.

With other notorious groups such as Fancy Bear and Sandworm also active, the cybersecurity landscape is becoming increasingly fraught.

Ascension faces fresh data breach fallout

A major cybersecurity breach has struck Ascension, one of the largest nonprofit healthcare systems in the US, exposing the sensitive information of over 430,000 patients.

The incident began in December 2024, when Ascension discovered that patient data had been compromised through a former business partner’s software flaw.

The indirect breach allowed cybercriminals to siphon off a wide range of personal, medical and financial details — including Social Security numbers, diagnosis codes, hospital admission records and insurance data.

The breach adds to growing concerns over the healthcare industry’s vulnerability to cyberattacks. In 2024 alone, 1,160 healthcare-related data breaches were reported, affecting 305 million records — a sharp rise from the previous year.

Many institutions still treat cybersecurity as an afterthought instead of a core responsibility, despite handling highly valuable and sensitive data.

Ascension itself has been targeted multiple times, including a ransomware attack in May 2024 that disrupted services at dozens of hospitals and affected nearly 5.6 million individuals.

Ascension has since filed notices with regulators and is offering two years of identity monitoring to those impacted. However, critics argue this response is inadequate and reflects a broader pattern of negligence across the sector.

The company has not named the third-party vendor responsible, but experts believe the incident may be tied to a larger ransomware campaign that exploited flaws in widely used file-transfer software.

Rather than treating such incidents as isolated, experts warn that these breaches highlight systemic flaws in healthcare’s digital infrastructure. As criminals grow more sophisticated and vendors remain vulnerable, patients bear the consequences.

Until healthcare providers prioritise cybersecurity instead of cutting corners, breaches like this are likely to become even more common — and more damaging.

Jersey artists push back against AI art

A Jersey illustrator has spoken out against the growing use of AI-generated images, calling the trend ‘heartbreaking’ for artists who fear losing their livelihoods to technology.

Abi Overland, known for her intricate hand-drawn illustrations, said it was deeply concerning to see AI-created visuals being shared online without acknowledging their impact on human creators.

She warned that AI systems often rely on artists’ existing work for training, raising serious questions about copyright and fairness.

Overland stressed that these images are not simply a product of new tools but of years of human experience and emotion, something AI cannot replicate. She believes the increasing normalisation of AI content is dangerous and could discourage aspiring artists from entering the field.

Fellow Jersey illustrator Jamie Willow echoed the concern, saying many local companies are already replacing human work with AI outputs, undermining the value of art created with genuine emotional connection and moral integrity.

However, not everyone sees AI as a threat. Sebastian Lawson of Digital Jersey argued that artists could instead use AI to enhance their creativity rather than replace it. He insisted that human creators would always have an edge thanks to their unique insight and ability to convey meaning through their work.

The debate comes as the House of Lords recently blocked the UK government’s data bill for a second time, demanding stronger protections for artists and musicians against AI misuse.

Meanwhile, government officials have said they will not consider any copyright changes unless they are sure such moves would benefit creators as well as tech companies.
