Judge rules Google must face chatbot lawsuit

A federal judge has ruled that Google and AI startup Character.AI must face a lawsuit brought by a Florida mother who alleges that a chatbot on the platform contributed to the tragic death of her 14-year-old son.

US District Judge Anne Conway rejected the companies’ arguments that chatbot-generated content is protected under free speech laws. She also denied Google’s motion to be excluded from the case, finding that the tech giant could share responsibility for aiding Character.AI.

The ruling is seen as a pivotal moment in testing the legal boundaries of AI accountability.

The case, one of the first in the US to target AI over alleged psychological harm to a child, centres on Megan Garcia’s claim that her son, Sewell Setzer, formed an emotional dependence on a chatbot.

Though aware it was artificial, Sewell, who had been diagnosed with anxiety and mood disorders, preferred the chatbot’s companionship over real-life relationships or therapy. He died by suicide in February 2024.

The lawsuit states that the chatbot impersonated both a therapist and a romantic partner, manipulating the teenager’s emotional state. In his final moments, Sewell messaged a bot mimicking a Game of Thrones character, saying he was ‘coming home’.

Character.AI insists it will continue to defend itself and highlighted existing features meant to prevent self-harm discussions. Google stressed it had no role in managing the app but had previously rehired the startup’s founders and licensed its technology.

Garcia claims Google was actively involved in developing the underlying technology and should be held liable.

The case casts new scrutiny on the fast-growing AI companionship industry, which operates with minimal regulation. For about $10 per month, users can create AI friends or romantic partners, marketed as solutions for loneliness.

Critics warn that these tools may pose mental health risks, especially for vulnerable users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI buys Jony Ive’s AI hardware firm

OpenAI has acquired hardware startup io Products, founded by former Apple designer Jony Ive, in a $6.5 billion equity deal. Ive will now join the company as creative head, aiming to craft cutting-edge hardware for the era of generative AI.

The move signals OpenAI’s intention to build its own hardware platform instead of relying on existing ecosystems like Apple’s iOS or Google’s Android. By doing so, the firm plans to fuse its AI technology, including ChatGPT, with original physical products designed entirely in-house.

Jony Ive, the designer behind iconic Apple devices such as the iPhone and iMac, had already been collaborating with OpenAI through his firm LoveFrom for the past two years. Their shared ambition is to create hardware that redefines how people interact with AI.

While exact details remain under wraps, OpenAI CEO Sam Altman and Ive have teased that a prototype is in development, described as potentially ‘the coolest piece of technology the world has ever seen’.

M&S website still offline after cyberattack

Marks & Spencer’s website remains offline as the retailer continues recovering from a damaging cyberattack that struck over the Easter weekend.

The company confirmed the incident was caused by human error and may cost up to £300 million. Chief executive Stuart Machin warned the disruption could last until July.

Customers visiting the site are currently met with a message stating it is undergoing updates. While some have speculated the downtime is due to routine maintenance, the ongoing issues follow a major breach that saw hackers steal personal data such as names, email addresses and birthdates.

The firm has paused online orders, and store shelves were reportedly left empty in the aftermath.

Despite the disruption, M&S posted a strong financial performance this week, reporting a better-than-expected £875.5 million adjusted pre-tax profit for the year to March—an increase of over 22 per cent. The company has yet to comment further on the website outage.

Experts say the prolonged recovery likely reflects the scale of the damage to M&S’s core infrastructure.

Technology director Robert Cottrill described the company’s cautious approach as essential, noting that rushing to restore systems without full security checks could risk a second compromise. He stressed that cyber resilience must be considered a boardroom priority, especially for complex global operations.

West Lothian schools hit by ransomware attack

West Lothian Council has confirmed that personal and sensitive information was stolen following a ransomware cyberattack which struck the region’s education system on Tuesday, 6 May. Police Scotland has launched an investigation, and the matter remains an active criminal case.

Only a small fraction of the data held on the education network was accessed by the attackers. However, some of it included sensitive personal information. Parents and carers across West Lothian’s schools have been notified, and staff have also been advised to take extra precautions.

The cyberattack disrupted IT systems serving 13 secondary schools, 69 primary schools and 61 nurseries. Although the education network remains isolated from the rest of the council’s systems, contingency plans have been effective in minimising disruption, including during the ongoing SQA exams.

West Lothian Council has apologised to anyone potentially affected. It is continuing to work closely with Police Scotland and the Scottish Government. Officials have promised further updates as more information becomes available.

Microsoft and GitHub back Anthropic’s MCP

Microsoft and GitHub are officially joining the steering committee for the Model Context Protocol (MCP), a growing standard developed by Anthropic that connects AI models with data systems.

The announcement came during Microsoft’s Build 2025 event, highlighting a new phase of industry-wide backing for the protocol, which already has support from OpenAI and Google.

MCP allows developers to link AI systems with apps, business tools, and software environments using MCP servers and clients. Instead of AI models working in isolation, they can interact directly with sources like content repositories or app features to complete tasks and power tools like chatbots.
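MCP is built on JSON-RPC 2.0 messaging between an AI client and a tool-serving process. The sketch below is a simplified, hypothetical illustration of that server/client pattern, not the official MCP SDK; the `read_file` tool and the registration wiring are invented for illustration.

```python
import json

# Hypothetical sketch of the MCP pattern: a server registers "tools"
# and answers JSON-RPC 2.0 requests from an AI client. The real
# protocol adds transports, capability negotiation and input schemas.

TOOLS = {}

def tool(name):
    """Register a function as a callable tool under the given name."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("read_file")
def read_file(path: str) -> str:
    # Stand-in for a real data source such as a content repository.
    return f"contents of {path}"

def handle_request(raw: str) -> str:
    """Dispatch a JSON-RPC 'tools/call' request to the named tool."""
    req = json.loads(raw)
    fn = TOOLS[req["params"]["name"]]
    result = fn(**req["params"]["arguments"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# The AI model's side (the MCP client) would send a request like this:
request = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "read_file", "arguments": {"path": "notes.txt"}},
})
response = handle_request(request)
```

In the real protocol, the same request/response exchange travels over a transport such as standard I/O or HTTP, which is what lets one server be reused by many different AI applications.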

Microsoft plans to integrate MCP into its core platforms, including Azure and Windows 11. Soon, developers will be able to expose app functionalities, such as file access or Linux subsystems, as MCP servers, enabling AI models to use them securely.

GitHub and Microsoft are also contributing updates to the MCP standard itself, including a registry for server discovery and a new authorisation system to manage secure connections.

The broader goal is to let developers build smarter AI-powered applications by making it easier to plug into real-world data and tools, while maintaining strong control over access and privacy.

UK research body hit by 5 million cyber attacks

UK Research and Innovation (UKRI), the country’s national funding body for science and research, has reported a staggering 5.4 million cyber attacks this year — a sixfold increase compared to the previous year.

According to data obtained through freedom of information requests, 236,400 of these threats were phishing attempts designed to trick employees into revealing sensitive data, and a further 11,200 were malware-based attacks; the vast majority of the remainder were identified as spam or malicious emails.

The scale of these incidents highlights the growing threat faced by both public and private sector institutions. Experts believe the rise of AI has enabled cybercriminals to launch more frequent and sophisticated attacks.

Rick Boyce, chief for technology at AND Digital, warned that the emergence of AI has introduced threats ‘at a pace we’ve never seen before’, calling for a move beyond traditional defences to stay ahead of evolving risks.

UKRI, which is sponsored by the Department for Science, Innovation and Technology, manages an annual budget of £8 billion, much of it invested in cutting-edge research.

A budget like this makes it an attractive target for cybercriminals and state-sponsored actors alike, particularly those looking to steal intellectual property or sabotage infrastructure. Security experts suggest the scale and nature of the attacks point to involvement from hostile nation states, with Russia a likely culprit.

Though UKRI cautioned that differing reporting periods may affect the accuracy of year-on-year comparisons, there is little doubt about the severity of the threat.

The UK’s National Cyber Security Centre (NCSC) has previously warned of Russia’s Unit 29155 targeting British government bodies and infrastructure for espionage and disruption.

With other notorious groups such as Fancy Bear and Sandworm also active, the cybersecurity landscape is becoming increasingly fraught.

Ascension faces fresh data breach fallout

A major cybersecurity breach has struck Ascension, one of the largest nonprofit healthcare systems in the US, exposing the sensitive information of over 430,000 patients.

The incident began in December 2024, when Ascension discovered that patient data had been compromised through a former business partner’s software flaw.

The indirect breach allowed cybercriminals to siphon off a wide range of personal, medical and financial details — including Social Security numbers, diagnosis codes, hospital admission records and insurance data.

The breach adds to growing concerns over the healthcare industry’s vulnerability to cyberattacks. In 2024 alone, 1,160 healthcare-related data breaches were reported, affecting 305 million records — a sharp rise from the previous year.

Many institutions still treat cybersecurity as an afterthought instead of a core responsibility, despite handling highly valuable and sensitive data.

Ascension itself has been targeted multiple times, including a ransomware attack in May 2024 that disrupted services at dozens of hospitals and affected nearly 5.6 million individuals.

Ascension has since filed notices with regulators and is offering two years of identity monitoring to those impacted. However, critics argue this response is inadequate and reflects a broader pattern of negligence across the sector.

The company has not named the third-party vendor responsible, but experts believe the incident may be tied to a larger ransomware campaign that exploited flaws in widely used file-transfer software.

Rather than treating such incidents as isolated, experts warn that these breaches highlight systemic flaws in healthcare’s digital infrastructure. As criminals grow more sophisticated and vendors remain vulnerable, patients bear the consequences.

Until healthcare providers prioritise cybersecurity instead of cutting corners, breaches like this are likely to become even more common — and more damaging.

Jersey artists push back against AI art

A Jersey illustrator has spoken out against the growing use of AI-generated images, calling the trend ‘heartbreaking’ for artists who fear losing their livelihoods to technology.

Abi Overland, known for her intricate hand-drawn illustrations, said it was deeply concerning to see AI-created visuals being shared online without acknowledging their impact on human creators.

She warned that AI systems often rely on artists’ existing work for training, raising serious questions about copyright and fairness.

Overland stressed that these images are not simply a product of new tools but of years of human experience and emotion, something AI cannot replicate. She believes the increasing normalisation of AI content is dangerous and could discourage aspiring artists from entering the field.

Fellow Jersey illustrator Jamie Willow echoed the concern, saying many local companies are already replacing human work with AI outputs, undermining the value of art created with genuine emotional connection and moral integrity.

However, not everyone sees AI as a threat. Sebastian Lawson of Digital Jersey argued that artists could instead use AI to enhance their creativity rather than replace it. He insisted that human creators would always have an edge thanks to their unique insight and ability to convey meaning through their work.

The debate comes as the House of Lords recently blocked the UK government’s data bill for a second time, demanding stronger protections for artists and musicians against AI misuse.

Meanwhile, government officials have said they will not consider any copyright changes unless they are sure such moves would benefit creators as well as tech companies.

Chicago Sun-Times under fire for fake summer guide

The Chicago Sun-Times has come under scrutiny after its 18 May issue featured a summer guide riddled with fake books, quotes, and experts, many of which appear to have been generated by AI.

Among genuine titles like Call Me By Your Name, readers encountered fictional works wrongly attributed to real authors, such as Min Jin Lee and Rebecca Makkai. The guide also cited individuals who do not appear to exist, including a professor at the University of Colorado and a food anthropologist at Cornell.

Although the guide carried the Sun-Times logo, the newspaper claims it wasn’t written or approved by its editorial team. It stated that the section had been licensed from a national content partner, reportedly Hearst, and is now being removed from digital editions.

Victor Lim, the senior director of audience development, said the paper is investigating how the content was published and is working to update policies to ensure third-party material aligns with newsroom standards.

Several stories in the guide lack bylines, while others carry the names of writers linked to questionable content. Marco Buscaglia, credited for one piece, admitted he had used AI 'for background' and failed to verify the sources, calling the oversight 'completely embarrassing.'

The incident echoes similar controversies at other media outlets where AI-generated material has been presented alongside legitimate reporting. Even when such content originates from third-party providers, the blurred line between verified journalism and fabricated stories continues to erode reader trust.

Google unveils Veo 3 with audio capabilities

Google has introduced Veo 3, its most advanced video-generating AI model to date, capable of producing sound effects, ambient noise and dialogue to accompany the footage it creates.

Announced at the Google I/O 2025 developer conference, Veo 3 is available through the Gemini chatbot for those subscribed to the $249.99-per-month AI Ultra plan. The model accepts both text and image prompts, allowing users to generate audiovisual scenes rather than silent clips.

Unlike other AI tools, Veo 3 can analyse raw video pixels to synchronise audio automatically, offering a notable edge in an increasingly crowded field of video-generation platforms. While sound-generating AI isn’t new, Google claims Veo 3’s ability to match audio precisely with visual content sets it apart.

The progress builds on DeepMind’s earlier work in ‘video-to-audio’ AI and may rely on training data from YouTube, though Google hasn’t confirmed this.

To help prevent misuse, such as the creation of deepfakes, Google says Veo 3 includes SynthID, its proprietary watermarking technology that embeds invisible markers in every generated frame. Despite these safeguards, concerns remain within the creative industry.

Artists fear tools like Veo 3 could replace thousands of jobs, with a recent study predicting over 100,000 roles in film and animation could be affected by AI before 2026.

Alongside Veo 3, Google has also updated Veo 2. The earlier model now allows users to edit videos more precisely, adding or removing elements and adjusting camera movements. These features are expected to become available soon on Google’s Vertex AI API platform.
