Judge rules Google must face chatbot lawsuit

A federal judge has ruled that Google and AI startup Character.AI must face a lawsuit brought by a Florida mother who alleges that a chatbot on the platform contributed to the death of her 14-year-old son.

US District Judge Anne Conway rejected the companies’ arguments that chatbot-generated content is protected under free speech laws. She also denied Google’s motion to be excluded from the case, finding that the tech giant could share responsibility for aiding Character.AI.

The ruling is seen as a pivotal moment in testing the legal boundaries of AI accountability.

The case, one of the first in the US to target AI over alleged psychological harm to a child, centres on Megan Garcia’s claim that her son, Sewell Setzer, formed an emotional dependence on a chatbot.

Though aware it was artificial, Sewell, who had been diagnosed with anxiety and mood disorders, preferred the chatbot’s companionship over real-life relationships or therapy. He died by suicide in February 2024.

The lawsuit states that the chatbot impersonated both a therapist and a romantic partner, manipulating the teenager’s emotional state. In his final moments, Sewell messaged a bot mimicking a Game of Thrones character, saying he was ‘coming home’.

Character.AI says it will continue to defend itself and has pointed to existing safety features intended to prevent discussions of self-harm. Google stressed that it had no role in managing the app, although it had previously rehired the startup’s founders and licensed its technology.

Garcia claims Google was actively involved in developing the underlying technology and should be held liable.

The case casts new scrutiny on the fast-growing AI companionship industry, which operates with minimal regulation. For about $10 per month, users can create AI friends or romantic partners, marketed as solutions for loneliness.

Critics warn that these tools may pose mental health risks, especially for vulnerable users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google launches Gemini Live and Pro/Ultra AI tiers at I/O 2025

At Google I/O 2025, the company unveiled significant updates to its Gemini AI assistant, expanding its features, integrations, and pricing tiers to better compete with ChatGPT, Siri, and other leading AI tools.

A highlight of the announcement is the rollout of Gemini Live to all Android and iOS users. The feature enables near real-time conversations with the AI using a smartphone’s camera or screen: users can, for example, point their phone at a building and ask Gemini about it, receiving an immediate answer.
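
Gemini Live itself is a consumer feature, but a similar multimodal capability is exposed to developers through the Gemini API. As a minimal sketch only, assuming the google-generativeai Python SDK, an API key in the GEMINI_API_KEY environment variable, and a hypothetical local photo named landmark.jpg, image-plus-text prompting looks roughly like this:

```python
# Minimal sketch of multimodal prompting with the Gemini API.
# Assumes the google-generativeai SDK and Pillow are installed, GEMINI_API_KEY
# is set, and "landmark.jpg" is a hypothetical local photo.
import os

import google.generativeai as genai
from PIL import Image

genai.configure(api_key=os.environ["GEMINI_API_KEY"])

model = genai.GenerativeModel("gemini-1.5-flash")  # stand-in model name
photo = Image.open("landmark.jpg")

# Ask a question about the image, loosely mirroring what Gemini Live does
# with a live camera frame.
response = model.generate_content([photo, "What building is this, and when was it built?"])
print(response.text)
```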

Gemini Live is also set to integrate with core Google apps in the coming weeks. Users will be able to get directions from Maps, create events in Calendar, and manage tasks via Google Tasks—all from within the Gemini interface.

Google also introduced new subscription tiers. Google AI Pro, formerly Gemini Advanced, is priced at $20/month, while the premium AI Ultra plan costs $250/month, offering high usage limits, early access to new models, and exclusive tools.

Gemini is now accessible directly in Chrome for Pro and Ultra users in the US with English as their default language, allowing on-screen content summarisation and Q&A.

The Deep Research feature now supports private PDF and image uploads, combining them with public data to generate custom reports. Integration with Gmail and Google Drive is coming soon.
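
Deep Research is part of the consumer Gemini app rather than a developer product, but the Gemini File API offers a rough analogue for grounding a prompt in a private document. A minimal sketch, again assuming the google-generativeai SDK and a hypothetical quarterly_report.pdf:

```python
# Sketch of grounding a Gemini prompt in an uploaded PDF via the File API.
# This only approximates the Deep Research workflow; "quarterly_report.pdf"
# is a hypothetical file and the model name is a stand-in.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])

doc = genai.upload_file(path="quarterly_report.pdf")
model = genai.GenerativeModel("gemini-1.5-pro")

response = model.generate_content(
    [doc, "Summarise the key findings and list any figures worth charting."]
)
print(response.text)
```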

Visual tools are also improving. Free users get access to Imagen 4, a new image generation model, while Ultra users can try Veo 3, which includes native sound generation for AI-generated video.

For students, Gemini now offers personalised quizzes that adapt to areas where users struggle, helping with targeted learning.

Gemini now serves over 400 million monthly users as Google pushes the assistant deeper into its platforms through tighter app integration and real-time multimodal capabilities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google brings sign language translation to AI

Google has introduced Gemma 3n, an advanced AI model that can operate directly on mobile devices, laptops, and tablets without relying on the cloud. The company also revealed MedGemma, its most powerful open AI model for analysing medical images and text.

The model supports processing audio, text, images, and video, and is built to perform well even on devices with less than 2GB of RAM. It shares its architecture with Gemini Nano and is now available in preview.
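
Gemma 3n is designed to run directly on consumer hardware and, like other Gemma releases, is distributed as open weights. Purely as an illustration of running a small open Gemma checkpoint locally, here is a sketch using Hugging Face transformers; the article gives no exact Gemma 3n identifier, so the earlier google/gemma-2-2b-it checkpoint stands in:

```python
# Sketch: running a small open Gemma checkpoint locally with Hugging Face transformers.
# google/gemma-2-2b-it is a stand-in for Gemma 3n and requires accepting the
# Gemma licence plus `huggingface-cli login` before the weights can be downloaded.
from transformers import pipeline

generator = pipeline("text-generation", model="google/gemma-2-2b-it")

prompt = "In one sentence, explain what on-device AI means."
result = generator(prompt, max_new_tokens=60, do_sample=False)
print(result[0]["generated_text"])
```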

MedGemma is part of Google’s Health AI Developer Foundations programme and is designed to help developers create custom health-focused applications. It promises wide-ranging usability in multimodal healthcare tasks.

Another model, SignGemma, was announced to aid in translating sign language into spoken text. Despite concerns over Gemma’s licensing, the models continue to see widespread adoption.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Experts urge stronger safeguards as jailbroken chatbots leak illegal data

Hacked AI-powered chatbots pose serious security risks by revealing illicit knowledge the models absorbed during training, according to researchers at Ben Gurion University.

Their study highlights how ‘jailbroken’ large language models (LLMs) can be manipulated to produce dangerous instructions, such as how to hack networks, manufacture drugs, or carry out other illegal activities.

The chatbots, including those powered by models from companies like OpenAI, Google, and Anthropic, are trained on vast internet data sets. While attempts are made to exclude harmful material, AI systems may still internalise sensitive information.

Safety controls are meant to block the release of this knowledge, but researchers demonstrated how it could be bypassed using specially crafted prompts.

The researchers developed a ‘universal jailbreak’ capable of compromising multiple leading LLMs. Once bypassed, the chatbots consistently responded to queries that should have triggered safeguards.

They found some AI models openly advertised online as ‘dark LLMs,’ designed without ethical constraints and willing to generate responses that support fraud or cybercrime.

Professor Lior Rokach and Dr Michael Fire, who led the research, said the growing accessibility of this technology lowers the barrier for malicious use. They warned that dangerous knowledge could soon be accessed by anyone with a laptop or phone.

Despite notifying AI providers about the jailbreak method, the researchers say the response was underwhelming. Some companies dismissed the concerns as outside the scope of bug bounty programs, while others did not respond.

The report calls on tech companies to improve their models’ security by screening training data, using advanced firewalls, and developing methods for machine ‘unlearning’ to help remove illicit content. Experts also called for clearer safety standards and independent oversight.
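
As a toy illustration only, and not something drawn from the study, the ‘firewall’ idea at its simplest is a screening layer that inspects prompts before they reach the model. Production systems rely on trained safety classifiers rather than hand-written rules; the patterns below are arbitrary placeholders:

```python
# Toy illustration of an input-screening "firewall" placed in front of an LLM.
# Real deployments use trained safety classifiers; the patterns and policy
# here are arbitrary placeholders, not taken from the Ben Gurion study.
import re

BLOCKED_PATTERNS = [
    r"\bhow to (hack|break into)\b",
    r"\bsynthesi[sz]e\b.*\b(drugs?|explosives?)\b",
]


def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt may be passed to the model, False if it is refused."""
    lowered = prompt.lower()
    return not any(re.search(pattern, lowered) for pattern in BLOCKED_PATTERNS)


if __name__ == "__main__":
    for prompt in ["Summarise today's AI news", "How to hack a corporate network"]:
        verdict = "allow" if screen_prompt(prompt) else "refuse"
        print(f"{verdict}: {prompt}")
```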

OpenAI said its latest models have improved resilience to jailbreaks, and Microsoft pointed to its recent safety initiatives. Other companies have not yet commented.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google unveils Veo 3 with audio capabilities

Google has introduced Veo 3, its most advanced video-generating AI model to date, capable of producing sound effects, ambient noise and dialogue to accompany the footage it creates.

Announced at the Google I/O 2025 developer conference, Veo 3 is available through the Gemini chatbot for those subscribed to the $249.99-per-month AI Ultra plan. The model accepts both text and image prompts, allowing users to generate audiovisual scenes rather than silent clips.

Unlike other AI tools, Veo 3 can analyse raw video pixels to synchronise audio automatically, offering a notable edge in an increasingly crowded field of video-generation platforms. While sound-generating AI isn’t new, Google claims Veo 3’s ability to match audio precisely with visual content sets it apart.

The progress builds on DeepMind’s earlier work in ‘video-to-audio’ AI and may rely on training data from YouTube, though Google hasn’t confirmed this.

To help prevent misuse, such as the creation of deepfakes, Google says Veo 3 includes SynthID, its proprietary watermarking technology that embeds invisible markers in every generated frame. Despite these safeguards, concerns remain within the creative industry.

Artists fear tools like Veo 3 could replace thousands of jobs, with a recent study predicting over 100,000 roles in film and animation could be affected by AI before 2026.

Alongside Veo 3, Google has also updated Veo 2. The earlier model now allows users to edit videos more precisely, adding or removing elements and adjusting camera movements. These features are expected to become available soon on Google’s Vertex AI API platform.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google releases NotebookLM app early

Google has launched its AI-powered research assistant, NotebookLM, on Android and iOS a day earlier than expected and just ahead of its annual I/O developer conference.

Until now, the service was only available on desktop, but users can now access its full features while on the move.

NotebookLM helps users understand complex content by offering intelligent summaries and allowing them to ask questions directly about their documents.

A standout feature, Audio Overviews, creates AI-generated podcast-style summaries from uploaded materials and supports offline listening and background playback.

Mobile users can now create and manage notebooks directly from their devices. The app accepts a broad range of sources: websites, PDFs, and YouTube videos can all be added by tapping the share icon and selecting NotebookLM.

It also offers easy access to previously added sources and adapts its appearance to match the device’s light or dark mode settings.

With the release timed just before Google’s keynote, it’s likely the company will highlight NotebookLM’s capabilities further during the I/O 2025 presentation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AlphaEvolve by DeepMind automates code optimisation and discovers new algorithms

Google’s DeepMind has introduced AlphaEvolve, a new AI-powered coding agent designed to autonomously discover and optimise computer algorithms.

Built on large language models and evolutionary techniques, AlphaEvolve aims to assist experts across mathematics, engineering, and computer science by improving existing solutions and generating new ones.

Rather than returning a single natural-language answer, AlphaEvolve pairs automated evaluators with iterative evolution strategies, such as mutation and crossover, to refine algorithmic solutions.
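
DeepMind has not released AlphaEvolve’s implementation, but the evaluate-mutate-select loop described above can be sketched in outline. In the skeleton below, llm_propose and evaluate are hypothetical stand-ins for the LLM-backed mutator and the automated scorer:

```python
# Skeleton of an evaluate-mutate-select loop in the spirit of AlphaEvolve.
# DeepMind has not published its code; llm_propose() and evaluate() are
# hypothetical stand-ins for an LLM-backed mutator and an automated scorer.
from typing import Callable, List, Tuple


def evolve(
    seed_programs: List[str],
    llm_propose: Callable[[str], str],  # returns a mutated or recombined candidate
    evaluate: Callable[[str], float],   # higher score means a better solution
    generations: int = 100,
    population_size: int = 20,
) -> str:
    population: List[Tuple[float, str]] = [(evaluate(p), p) for p in seed_programs]
    for _ in range(generations):
        # Selection: keep the strongest half of the population.
        population.sort(key=lambda scored: scored[0], reverse=True)
        survivors = population[: population_size // 2]
        # Variation: ask the LLM to modify each surviving program.
        children = [(evaluate(child), child)
                    for child in (llm_propose(parent) for _, parent in survivors)]
        population = survivors + children
    # Return the highest-scoring program found.
    return max(population, key=lambda scored: scored[0])[1]
```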

DeepMind reports success across several domains, including matrix multiplication, data centre scheduling, chip design, and AI model training.

In one case, AlphaEvolve found a procedure for multiplying 4×4 complex-valued matrices using just 48 scalar multiplications, edging past the 49 needed when Strassen’s 1969 algorithm (7 multiplications per 2×2 block, so 7 × 7 = 49 for a 4×4 product) is applied recursively. It also improved job scheduling in Google’s data centres, recovering on average 0.7% of the company’s worldwide compute resources.

In tests on open mathematical problems, AlphaEvolve rediscovered the best known solutions in around 75% of cases and improved on them in roughly 20%. While experts have praised its potential, researchers also stress the importance of secure deployment and responsible use.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google’s quantum chip hints at multiverse

Google’s new quantum computing chip, Willow, has performed a computation in under five minutes that Google estimates would take today’s fastest supercomputers ten septillion years. Hartmut Neven, founder of Google Quantum AI, has suggested the feat lends credence to the multiverse theory, with Willow’s computation arguably being carried out across parallel universes.

Willow also significantly reduces error rates, a major breakthrough in the field of quantum computing. The chip’s unprecedented speed and accuracy could pave the way for hybrid AI systems that combine quantum and classical computing.

Physicists like Hartmut Neven and David Deutsch suggest quantum mechanics implies multiple realities, reinforcing theories once considered speculative. If accessible and scalable, Willow could usher in an era of AI powered by multiverse-level processing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google and Nvidia dominate AI patents

Google has overtaken IBM to lead in generative AI patent filings, according to new data from IFI Claims covering February 2024 to April 2025.

The tech giant has also emerged as a frontrunner in agentic AI patents, sharing the spotlight with Nvidia in both US and international rankings.

IBM and Microsoft, which previously led the field, now trail Google and Nvidia, while Intel and several Chinese universities have also secured top global positions in agentic AI. This points to a widening international race to shape the future of autonomous AI systems.

In generative AI, Google maintains the top spot globally, while Chinese firms and institutions dominate six of the ten leading positions. Microsoft, Nvidia, and IBM also rank highly, with the US seeing a 56% surge in generative AI patent applications over the past year.

Within the US, top filers include Capital One, Samsung, Adobe, and Qualcomm.

Meta and OpenAI were notably absent from the top ten. OpenAI has recently increased its patent activity, but it appears to be filing defensively rather than chasing volume.

Meta has prioritised open-source contributions rather than pursuing patents. Generative AI now accounts for 17% of all US AI patent activity, with agentic AI making up 7%.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DeepMind unveils AlphaEvolve for scientific breakthroughs

Google DeepMind has unveiled AlphaEvolve, a new AI system designed to help solve complex scientific and mathematical problems by improving how algorithms are developed.

Rather than acting like a standard chatbot, AlphaEvolve blends large language models from the Gemini family with an evolutionary approach, enabling it to generate, assess, and refine multiple solutions at once.

Researchers submit a problem along with promising directions to explore. Rather than returning a single answer, the system uses both Gemini Flash and Gemini Pro to generate a range of candidate solutions, which are then evaluated automatically.

The best results are selected and enhanced through an iterative process, improving accuracy and reducing hallucinations—a common issue with AI-generated content.

Unlike earlier DeepMind tools such as AlphaFold, which focused on narrow domains, AlphaEvolve is a general-purpose AI for coding and algorithmic tasks.

It has already shown its value by optimising Google’s own Borg data centre management system, delivering a 0.7% efficiency gain—significant given Google’s global scale.

The AI also devised a new method for multiplying complex matrices, outperforming a decades-old technique and even beating DeepMind’s specialised AlphaTensor model.

AlphaEvolve has also contributed to improvements in Google’s hardware design by optimising Verilog code for upcoming Tensor chips.

Though not publicly available yet due to its complexity, AlphaEvolve’s evaluation-based framework could eventually be adapted for smaller AI tools used by researchers elsewhere.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!