Ascension faces fresh data breach fallout

A major cybersecurity breach has struck Ascension, one of the largest nonprofit healthcare systems in the US, exposing the sensitive information of over 430,000 patients.

The incident came to light in December 2024, when Ascension discovered that patient data had been compromised through a software flaw at a former business partner.

The indirect breach allowed cybercriminals to siphon off a wide range of personal, medical and financial details — including Social Security numbers, diagnosis codes, hospital admission records and insurance data.

The breach adds to growing concerns over the healthcare industry’s vulnerability to cyberattacks. In 2024 alone, 1,160 healthcare-related data breaches were reported, affecting 305 million records — a sharp rise from the previous year.

Many institutions still treat cybersecurity as an afterthought instead of a core responsibility, despite handling highly valuable and sensitive data.

Ascension itself has been targeted multiple times, including a ransomware attack in May 2024 that disrupted services at dozens of hospitals and affected nearly 5.6 million individuals.

Ascension has since filed notices with regulators and is offering two years of identity monitoring to those impacted. However, critics argue this response is inadequate and reflects a broader pattern of negligence across the sector.

The company has not named the third-party vendor responsible, but experts believe the incident may be tied to a larger ransomware campaign that exploited flaws in widely used file-transfer software.

Rather than treating such incidents as isolated, experts warn that these breaches highlight systemic flaws in healthcare’s digital infrastructure. As criminals grow more sophisticated and vendors remain vulnerable, patients bear the consequences.

Until healthcare providers prioritise cybersecurity instead of cutting corners, breaches like this are likely to become even more common — and more damaging.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Chicago Sun-Times under fire for fake summer guide

The Chicago Sun-Times has come under scrutiny after its 18 May issue featured a summer guide riddled with fake books, quotes, and experts, many of which appear to have been generated by AI.

Among genuine titles like Call Me By Your Name, readers encountered fictional works wrongly attributed to real authors, such as Min Jin Lee and Rebecca Makkai. The guide also cited individuals who do not appear to exist, including a professor at the University of Colorado and a food anthropologist at Cornell.

Although the guide carried the Sun-Times logo, the newspaper claims it wasn’t written or approved by its editorial team. It stated that the section had been licensed from a national content partner, reportedly Hearst, and is now being removed from digital editions.

Victor Lim, the senior director of audience development, said the paper is investigating how the content was published and is working to update policies to ensure third-party material aligns with newsroom standards.

Several stories in the guide lack bylines or carry names linked to other questionable content. Marco Buscaglia, credited for one piece, admitted to using AI ‘for background’ but said he had failed to verify the sources this time, calling the oversight ‘completely embarrassing.’

The incident echoes similar controversies at other media outlets where AI-generated material has been presented alongside legitimate reporting. Even when such content originates from third-party providers, the blurred line between verified journalism and fabricated stories continues to erode reader trust.

Legal aid data breach affects UK applicants

The UK Ministry of Justice has confirmed a serious cyber-attack on its Legal Aid Agency, first detected on 23 April and found on 16 May to be more extensive than initially believed. Investigators found that a wide range of personal details belonging to applicants dating back to 2010 had been accessed.

The breach has prompted urgent security reviews and cooperation with the National Cyber Security Centre. Stolen information may include names, addresses, dates of birth, national ID numbers, criminal histories, employment records and financial data such as debts and contributions.

While the total number of affected individuals remains unconfirmed, publicly available figures suggest hundreds of thousands of applications across the last year alone. Victims have been urged to monitor for suspicious communications and to change passwords promptly.

UK legal aid services have been taken offline as contingency measures are put in place to maintain support for vulnerable users. Jane Harbottle, CEO of the Legal Aid Agency, expressed regret over the incident and reassured applicants that efforts are underway to restore secure access.

Criminals exploit weak mail security in new fraud surge

Cheque washing fraud is making a worrying comeback in the US, fuelled by both AI-powered identity theft and lax mail security. Criminals are intercepting posted cheques, erasing the original details with chemicals, and rewriting them for higher amounts or different recipients.

The rise in such fraud, often unnoticed until the money is long gone, is prompting experts to warn the public to take immediate preventative steps. Reports show a sharp increase in cheque-related scams, with US financial institutions flagging over 665,000 suspicious cases in 2023 alone.

Organised crime groups are now blending traditional cheque theft with modern techniques, such as AI-generated identities and forged digital images. The fraudsters are also using mobile deposits, phishing emails, and business email compromise to trick individuals and companies into transferring funds.

For added protection, individuals and businesses are advised to invest in fraud monitoring, use cheques with security features, and report any suspicious activity without delay. With losses running into hundreds of millions, the growing threat of cheque washing shows no signs of slowing down.

Netherlands expands espionage laws to include cyber activities

The Dutch government has adopted new legislation expanding the scope of its espionage laws to include digital espionage and other activities carried out on behalf of foreign states that may harm Dutch national interests. The updated law complements existing provisions that criminalise the disclosure of state secrets by adding penalties for leaking sensitive, but not classified, information and for conducting harmful activities linked to foreign entities.

Under the revised legal framework, penalties for computer-related offences associated with espionage have been increased. Individuals found guilty of such offences could face up to eight years in prison, or up to twelve years in particularly severe cases.

Dutch Justice and Security Minister David van Weel stated that the measures aim to enhance national resilience against foreign threats.

In parallel, the government is moving forward with plans to implement vetting procedures for researchers and students seeking access to sensitive technologies at Dutch academic institutions. This follows growing concern over foreign interest in strategic research, particularly from China, as noted by Dutch intelligence services.

In recent assessments, Dutch authorities have reported both Chinese cyber activities targeting intellectual property and Russian state-linked attempts to disrupt national infrastructure. Incidents include reported efforts to infiltrate institutions based in The Hague, such as the International Criminal Court and the Organisation for the Prohibition of Chemical Weapons.

Microsoft brings Grok AI to Azure

Microsoft has become one of the first major cloud providers to offer managed access to Grok, the controversial AI model from Elon Musk’s xAI startup.

Now available through the Azure AI Foundry platform, both Grok 3 and Grok 3 mini will be billed by Microsoft and include the same service-level agreements as other Azure-hosted models.

Grok gained attention for its unfiltered and provocative tone, marketed by Musk as a more candid alternative to mainstream AI.

Unlike ChatGPT, it has been known to use vulgar language and provide responses on sensitive topics that other models typically avoid.

However, the AI has stirred criticism, particularly over troubling behaviour such as undressing women in photos and referencing conspiracy theories. Incidents of censorship and offensive content have raised concerns about its deployment on Musk’s platform X.

Instead of replicating that experience, Microsoft is offering a more controlled version of Grok within Azure. These versions include stricter content controls, enhanced data integration, and improved governance tools, distinguishing them from the models directly available through xAI.

Can AI replace therapists?

With mental health waitlists at record highs and many struggling to access affordable therapy, some are turning to AI chatbots for support.

Kelly, who waited months for NHS therapy, found solace in character.ai bots, describing them as always available, judgment-free companions. ‘It was like a cheerleader,’ she says, noting how bots helped her cope with anxiety and heartbreak.

But despite emotional benefits for some, AI chatbots are not without serious risks. Character.ai is facing a lawsuit from the mother of a 14-year-old who died by suicide after reportedly forming a harmful relationship with an AI character.

Other bots, like one from the National Eating Disorder Association, were shut down after giving dangerous advice.

Even so, demand is high. In April 2024 alone, 426,000 mental health referrals were made in England, and over a million people are still waiting for care. Apps like Wysa, used by 30 NHS services, aim to fill the gap by offering CBT-based self-help tools and crisis support.

Experts warn, however, that chatbots lack context, emotional intuition, and safeguarding. Professor Hamed Haddadi calls them ‘inexperienced therapists’ that may agree too easily or misunderstand users.

Ethicists like Dr Paula Boddington point to bias and cultural gaps in the AI training data. And privacy is a looming concern: ‘You’re not entirely sure how your data is being used,’ says psychologist Ian MacRae.

Still, users like Nicholas, who lives with autism and depression, say AI has helped when no one else was available. ‘It was so empathetic,’ he recalls, describing how Wysa comforted him during a night of crisis.

A Dartmouth study found AI users saw a 51% drop in depressive symptoms, but even its authors stress bots can’t replace human therapists. Most experts agree AI tools may serve as temporary relief or early intervention—but not as long-term substitutes.

As John, another user, puts it: ‘It’s a stopgap. When nothing else is there, you clutch at straws.’

Lords reject UK AI copyright bill again

The UK government has suffered a second defeat in the House of Lords over its Data (Use and Access) Bill, as peers once again backed a copyright-focused amendment aimed at protecting artists from AI content scraping.

Baroness Kidron, a filmmaker and digital rights advocate, led the charge, accusing ministers of listening to the ‘sweet whisperings of Silicon Valley’ and allowing tech firms to ‘redefine theft’ by exploiting copyrighted material without permission.

Her amendment would force AI companies to disclose their training data sources and obtain consent from rights holders.

The government had previously rejected this amendment, arguing it would lead to ‘piecemeal’ legislation and pre-empt ongoing consultations.

But Kidron’s position was strongly supported across party lines, with peers calling the current AI practices ‘burglary’ and warning of catastrophic damage to the UK’s creative sector.

High-profile artists like Sir Elton John, Paul McCartney, Annie Lennox, and Kate Bush have condemned the government’s stance, with Sir Elton branding ministers ‘losers’ and accusing them of enabling theft.

Peers from Labour, the Lib Dems, the Conservatives, and the crossbenches united to defend UK copyright law, calling the government’s actions a betrayal of the country’s leadership in intellectual property rights.

Labour’s Lord Brennan warned against a ‘double standard’ for AI firms, while Lord Berkeley insisted immediate action was needed to prevent long-term harm.

Technology Minister Baroness Jones countered that no country has resolved the AI-copyright dilemma and warned that the amendment would only create more regulatory confusion.

Nonetheless, peers voted overwhelmingly in favour of Kidron’s proposal—287 to 118—sending the bill back to the Commons with a strengthened demand for transparency and copyright safeguards.

Coinbase hit by multiple data breach lawsuits

Coinbase faces multiple lawsuits after revealing a data breach in which bribed customer support agents leaked user information. At least six lawsuits were filed between 15 and 16 May, accusing the exchange of poor security and of mishandling the breach.

One lawsuit filed in New York claims Coinbase failed to protect sensitive data of millions, including names, addresses, phone numbers, and partial Social Security numbers.

The complaint says the exchange’s response was slow and inadequate, putting users at risk of identity theft and fraud.

Other lawsuits allege Coinbase did not spend enough on security and demand compensation and stronger protections. One case asks the court to order Coinbase to delete sensitive data and hire third-party auditors.

Coinbase declined to comment on the lawsuits but confirmed it refused a $20 million ransom. It plans to reimburse users who lost crypto to phishing scams related to the breach. The company also fired involved customer support agents.

Following the breach announcement, Coinbase shares fell 7% but rebounded quickly, closing higher on 16 May.

Elton John threatens legal fight over AI use

Sir Elton John has lashed out at the UK government over plans that could allow AI companies to use copyrighted content without paying artists, calling ministers ‘absolute losers’ and accusing them of ‘thievery on a high scale.’

He warned that younger musicians, without the means to challenge tech giants, would be most at risk if the proposed changes go ahead.

The row centres on a rejected House of Lords amendment to the Data Bill, which would have required AI firms to disclose what material they use.

Despite a strong majority in favour in the Lords, the Commons blocked the move, meaning the bill will keep bouncing between the two chambers until a compromise is reached.

Sir Elton, joined by playwright James Graham, said the government was failing to defend creators and seemed more interested in appeasing powerful tech firms.

More than 400 artists, including Sir Paul McCartney, have signed a letter urging Prime Minister Sir Keir Starmer to strengthen copyright protections instead of allowing AI to mine their work unchecked.

While the government insists no changes will be made unless they benefit creators, critics say the current approach risks sacrificing the UK’s music industry for Silicon Valley’s gain.

Sir Elton has threatened legal action if the plans go ahead, saying, ‘We’ll fight it all the way.’
