Google launches Gemini Live and Pro/Ultra AI tiers at I/O 2025

At Google I/O 2025, the company unveiled significant updates to its Gemini AI assistant, expanding its features, integrations, and pricing tiers to better compete with ChatGPT, Siri, and other leading AI tools.

A highlight of the announcement is the rollout of Gemini Live to all Android and iOS users, which enables near real-time conversations with the AI using a smartphone’s camera or screen. Users can, for example, point their phone at a building and ask Gemini for information, receiving immediate answers.

Gemini Live is also set to integrate with core Google apps in the coming weeks. Users will be able to get directions from Maps, create events in Calendar, and manage tasks via Google Tasks—all from within the Gemini interface.

Google also introduced new subscription tiers. Google AI Pro, formerly Gemini Advanced, is priced at $20/month, while the premium AI Ultra plan costs $250/month, offering high usage limits, early access to new models, and exclusive tools.

Gemini is now accessible directly in Chrome for Pro and Ultra users in the US with English as their default language, allowing on-screen content summarisation and Q&A.

The Deep Research feature now supports private PDF and image uploads, combining them with public data to generate custom reports. Integration with Gmail and Google Drive is coming soon.

Visual tools are also improving. Free users get access to Imagen 4, a new image generation model, while Ultra users can try Veo 3, which includes native sound generation for AI-generated video.

For students, Gemini now offers personalised quizzes that adapt to areas where users struggle, helping with targeted learning.

Gemini now serves over 400 million monthly users, as Google deepens the assistant's integration and real-time multimodal capabilities across its platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

West Lothian schools hit by ransomware attack

West Lothian Council has confirmed that personal and sensitive information was stolen following a ransomware cyberattack which struck the region’s education system on Tuesday, 6 May. Police Scotland has launched an investigation, and the matter remains an active criminal case.

Only a small fraction of the data held on the education network was accessed by the attackers. However, some of it included sensitive personal information. Parents and carers across West Lothian’s schools have been notified, and staff have also been advised to take extra precautions.

The cyberattack disrupted IT systems serving 13 secondary schools, 69 primary schools and 61 nurseries. Although the education network remains isolated from the rest of the council’s systems, contingency plans have been effective in minimising disruption, including during the ongoing SQA exams.

West Lothian Council has apologised to anyone potentially affected. It is continuing to work closely with Police Scotland and the Scottish Government. Officials have promised further updates as more information becomes available.

UK research body hit by 5.4 million cyber attacks

UK Research and Innovation (UKRI), the country’s national funding body for science and research, has reported a staggering 5.4 million cyber attacks this year — a sixfold increase compared to the previous year.

According to data obtained through freedom of information requests, 236,400 of the threats were phishing attempts designed to trick employees into revealing sensitive data, and a further 11,200 were malware-based attacks; the vast majority of the remainder were identified as spam or malicious emails.

The scale of these incidents highlights the growing threat faced by both public and private sector institutions. Experts believe the rise of AI has enabled cybercriminals to launch more frequent and sophisticated attacks.

Rick Boyce, chief for technology at AND Digital, warned that the emergence of AI has introduced threats ‘at a pace we’ve never seen before’, calling for a move beyond traditional defences to stay ahead of evolving risks.

UKRI, which is sponsored by the Department for Science, Innovation and Technology, manages an annual budget of £8 billion, much of it invested in cutting-edge research.

A budget like this makes it an attractive target for cybercriminals and state-sponsored actors alike, particularly those looking to steal intellectual property or sabotage infrastructure. Security experts suggest the scale and nature of the attacks point to involvement from hostile nation states, with Russia a likely culprit.

Though UKRI cautioned that differing reporting periods may affect the accuracy of year-on-year comparisons, there is little doubt about the severity of the threat.

The UK’s National Cyber Security Centre (NCSC) has previously warned of Russia’s Unit 29155 targeting British government bodies and infrastructure for espionage and disruption.

With other notorious groups such as Fancy Bear and Sandworm also active, the cybersecurity landscape is becoming increasingly fraught.

Ascension faces fresh data breach fallout

A major cybersecurity breach has struck Ascension, one of the largest nonprofit healthcare systems in the US, exposing the sensitive information of over 430,000 patients.

The incident began in December 2024, when Ascension discovered that patient data had been compromised through a former business partner’s software flaw.

The indirect breach allowed cybercriminals to siphon off a wide range of personal, medical and financial details — including Social Security numbers, diagnosis codes, hospital admission records and insurance data.

The breach adds to growing concerns over the healthcare industry’s vulnerability to cyberattacks. In 2024 alone, 1,160 healthcare-related data breaches were reported, affecting 305 million records — a sharp rise from the previous year.

Many institutions still treat cybersecurity as an afterthought instead of a core responsibility, despite handling highly valuable and sensitive data.

Ascension itself has been targeted multiple times, including a ransomware attack in May 2024 that disrupted services at dozens of hospitals and affected nearly 5.6 million individuals.

Ascension has since filed notices with regulators and is offering two years of identity monitoring to those impacted. However, critics argue this response is inadequate and reflects a broader pattern of negligence across the sector.

The company has not named the third-party vendor responsible, but experts believe the incident may be tied to a larger ransomware campaign that exploited flaws in widely used file-transfer software.

Rather than treating such incidents as isolated, experts warn that these breaches highlight systemic flaws in healthcare’s digital infrastructure. As criminals grow more sophisticated and vendors remain vulnerable, patients bear the consequences.

Until healthcare providers prioritise cybersecurity instead of cutting corners, breaches like this are likely to become even more common — and more damaging.

Jersey artists push back against AI art

A Jersey illustrator has spoken out against the growing use of AI-generated images, calling the trend ‘heartbreaking’ for artists who fear losing their livelihoods to technology.

Abi Overland, known for her intricate hand-drawn illustrations, said it was deeply concerning to see AI-created visuals being shared online without acknowledging their impact on human creators.

She warned that AI systems often rely on artists’ existing work for training, raising serious questions about copyright and fairness.

Overland stressed that human-made images are not simply the product of tools but of years of experience and emotion, something AI cannot replicate. She believes the increasing normalisation of AI content is dangerous and could discourage aspiring artists from entering the field.

Fellow Jersey illustrator Jamie Willow echoed the concern, saying many local companies are already replacing human work with AI outputs, undermining the value of art created with genuine emotional connection and moral integrity.

However, not everyone sees AI as a threat. Sebastian Lawson of Digital Jersey argued that artists could instead use AI to enhance their creativity rather than replace it. He insisted that human creators would always have an edge thanks to their unique insight and ability to convey meaning through their work.

The debate comes as the House of Lords recently blocked the UK government’s data bill for a second time, demanding stronger protections for artists and musicians against AI misuse.

Meanwhile, government officials have said they will not consider any copyright changes unless they are sure such moves would benefit creators as well as tech companies.

Chicago Sun-Times under fire for fake summer guide

The Chicago Sun-Times has come under scrutiny after its 18 May issue featured a summer guide riddled with fake books, quotes, and experts, many of which appear to have been generated by AI.

Among genuine titles like Call Me By Your Name, readers encountered fictional works wrongly attributed to real authors, such as Min Jin Lee and Rebecca Makkai. The guide also cited individuals who do not appear to exist, including a professor at the University of Colorado and a food anthropologist at Cornell.

Although the guide carried the Sun-Times logo, the newspaper claims it wasn’t written or approved by its editorial team. It stated that the section had been licensed from a national content partner, reportedly Hearst, and is now being removed from digital editions.

Victor Lim, the senior director of audience development, said the paper is investigating how the content was published and is working to update policies to ensure third-party material aligns with newsroom standards.

Several stories in the guide lack bylines or carry names linked to other questionable content. Marco Buscaglia, the writer credited for one piece, admitted to using AI 'for background' and conceded that he had failed to verify the sources on this occasion, calling the oversight 'completely embarrassing.'

The incident echoes similar controversies at other media outlets where AI-generated material has been presented alongside legitimate reporting. Even when such content originates from third-party providers, the blurred line between verified journalism and fabricated stories continues to erode reader trust.

Half of young people would prefer life without the internet

Nearly half of UK youths aged 16 to 21 say they would prefer to grow up without the internet, a new survey reveals. The British Standards Institution found that 68% feel worse after using social media and half would support a digital curfew past 10 p.m.

These findings come as the government considers app usage limits for platforms like TikTok and Instagram. The study also showed that many UK young people feel compelled to hide their online behaviour: 42% admitted lying to parents, and a similar number have fake or burner accounts.

More worryingly, 27% said they have shared their location with strangers, while others admitted pretending to be someone else entirely. Experts argue that digital curfews alone won’t reduce exposure to online harms without broader safeguards in place.

Campaigners and charities are calling for urgent legislation that puts children’s safety before tech profits. The Molly Rose Foundation stressed the danger of algorithms pushing harmful content, while the NSPCC urged a shift towards less addictive and safer online spaces.

The majority of young people surveyed want more protection online and clearer action from tech firms and policymakers.

Can AI replace therapists?

With mental health waitlists at record highs and many struggling to access affordable therapy, some are turning to AI chatbots for support.

Kelly, who waited months for NHS therapy, found solace in character.ai bots, describing them as always available, judgment-free companions. ‘It was like a cheerleader,’ she says, noting how bots helped her cope with anxiety and heartbreak.

But despite emotional benefits for some, AI chatbots are not without serious risks. Character.ai is facing a lawsuit from the mother of a 14-year-old who died by suicide after reportedly forming a harmful relationship with an AI character.

Other bots, like one from the National Eating Disorder Association, were shut down after giving dangerous advice.

Even so, demand is high. In April 2024 alone, 426,000 mental health referrals were made in England, and over a million people are still waiting for care. Apps like Wysa, used by 30 NHS services, aim to fill the gap by offering CBT-based self-help tools and crisis support.

Experts warn, however, that chatbots lack context, emotional intuition, and safeguarding. Professor Hamed Haddadi calls them ‘inexperienced therapists’ that may agree too easily or misunderstand users.

Ethicists like Dr Paula Boddington point to bias and cultural gaps in the AI training data. And privacy is a looming concern: ‘You’re not entirely sure how your data is being used,’ says psychologist Ian MacRae.

Still, users like Nicholas, who lives with autism and depression, say AI has helped when no one else was available. ‘It was so empathetic,’ he recalls, describing how Wysa comforted him during a night of crisis.

A Dartmouth study found AI users saw a 51% drop in depressive symptoms, but even its authors stress bots can’t replace human therapists. Most experts agree AI tools may serve as temporary relief or early intervention—but not as long-term substitutes.

As John, another user, puts it: ‘It’s a stopgap. When nothing else is there, you clutch at straws.’

AI Darth Vader in Fortnite sparks union dispute

The use of an AI-generated Darth Vader voice in Fortnite has triggered a legal dispute between SAG-AFTRA and Epic Games.

According to GamesIndustry.biz, the actors’ union filed an unfair labor practice complaint, claiming it was not informed or consulted about the decision to use an artificial voice model in the game.

In Fortnite’s Galactic Battle season, players who defeat Darth Vader in Battle Royale can recruit him, triggering limited voice interactions powered by conversational AI.

The voice used stems from a licensing agreement with the estate of James Earl Jones, who retired in 2022 and granted rights for AI use of his iconic performance.

While Epic Games has confirmed it had legal permission to use Jones’ voice, SAG-AFTRA alleges the company bypassed union protocols by not informing them or offering the role to a human actor.

The outcome of this dispute could have broader implications for how AI voices are integrated into video games and media going forward, particularly regarding labour rights and union oversight.

Lords reject UK AI copyright bill again

The UK government has suffered a second defeat in the House of Lords over its Data (Use and Access) Bill, as peers once again backed a copyright-focused amendment aimed at protecting artists from AI content scraping.

Baroness Kidron, a filmmaker and digital rights advocate, led the charge, accusing ministers of listening to the ‘sweet whisperings of Silicon Valley’ and allowing tech firms to ‘redefine theft’ by exploiting copyrighted material without permission.

Her amendment would force AI companies to disclose their training data sources and obtain consent from rights holders.

The government had previously rejected this amendment, arguing it would lead to ‘piecemeal’ legislation and pre-empt ongoing consultations.

But Kidron’s position was strongly supported across party lines, with peers calling the current AI practices ‘burglary’ and warning of catastrophic damage to the UK’s creative sector.

High-profile artists like Sir Elton John, Paul McCartney, Annie Lennox, and Kate Bush have condemned the government’s stance, with Sir Elton branding ministers ‘losers’ and accusing them of enabling theft.

Peers from Labour, the Lib Dems, the Conservatives, and the crossbenches united to defend UK copyright law, calling the government’s actions a betrayal of the country’s leadership in intellectual property rights.

Labour’s Lord Brennan warned against a ‘double standard’ for AI firms, while Lord Berkeley insisted immediate action was needed to prevent long-term harm.

Technology Minister Baroness Jones countered that no country has resolved the AI-copyright dilemma and warned that the amendment would only create more regulatory confusion.

Nonetheless, peers voted overwhelmingly in favour of Kidron’s proposal—287 to 118—sending the bill back to the Commons with a strengthened demand for transparency and copyright safeguards.
