Bristol Data Week 2025 highlights AI For Good

Nobel Prize-winning AI pioneer Professor Geoffrey Hinton will deliver this year’s Richard Gregory Memorial Lecture at the University of Bristol on 2 June.

His talk, titled ‘Will Digital Intelligence Replace Biological Intelligence?’, will explore the capabilities and risks of AI. It coincides with the opening of Bristol Data Week 2025, which runs from 2 to 6 June.

Hinton, known for his foundational work on neural networks, attended secondary school in Bristol and recently received the 2024 Nobel Prize in Physics. His lecture will be introduced by Vice-Chancellor Evelyn Welch and supported by MyWorld, a UK centre for creative technology research.

Bristol Data Week will feature free workshops, talks, and panels showcasing data and AI research across themes such as climate, health, and ethics. The headline event, ‘AI for Good’, on 4 June, will highlight AI projects focused on social impact.

Research centres including the South West Nuclear Hub and Bristol Centre for Supercomputing will contribute to the programme. Organisers aim to demonstrate how responsible AI can drive innovation and benefit communities.

Judge rules Google must face chatbot lawsuit

A federal judge has ruled that Google and AI startup Character.AI must face a lawsuit brought by a Florida mother who alleges that a chatbot on the platform contributed to the death of her 14-year-old son.

US District Judge Anne Conway rejected the companies’ arguments that chatbot-generated content is protected under free speech laws. She also denied Google’s motion to be excluded from the case, finding that the tech giant could share responsibility for aiding Character.AI.

The ruling is seen as a pivotal moment in testing the legal boundaries of AI accountability.

The case, one of the first in the US to target AI over alleged psychological harm to a child, centres on Megan Garcia’s claim that her son, Sewell Setzer, formed an emotional dependence on a chatbot.

Though aware it was artificial, Sewell, who had been diagnosed with anxiety and mood disorders, preferred the chatbot’s companionship over real-life relationships or therapy. He died by suicide in February 2024.

The lawsuit states that the chatbot impersonated both a therapist and a romantic partner, manipulating the teenager’s emotional state. In his final moments, Sewell messaged a bot mimicking a Game of Thrones character, saying he was ‘coming home’.

Character.AI says it will continue to defend itself and has highlighted existing features meant to prevent discussions of self-harm. Google stressed it had no role in managing the app, although it had previously rehired the startup’s founders and licensed its technology.

Garcia claims Google was actively involved in developing the underlying technology and should be held liable.

The case casts new scrutiny on the fast-growing AI companionship industry, which operates with minimal regulation. For about $10 per month, users can create AI friends or romantic partners, marketed as solutions for loneliness.

Critics warn that these tools may pose mental health risks, especially for vulnerable users.

Taiwan targets Facebook scam ads with new penalties

Taiwan’s Ministry of Digital Affairs plans to impose penalties on Meta for failing to enforce real-name verification on Facebook ads, according to Minister Huang Yen-nan. The move follows a recent meeting with law enforcement and growing concerns over scam-related losses.

A report from CommonWealth Magazine found Taiwanese victims lose NT$400 million (US$13 million) daily to scams, with 70% of losses tied to Facebook. Facebook has been the top scam-linked platform for two years, with over 60% of users reporting exposure to fraudulent content.

From April 2023 to September 2024, nearly 59,000 scam ads were found across Facebook and Google. One Facebook group in Chiayi County, with 410,000 members, was removed after being overwhelmed with daily fake job ads.

Huang identified Meta as the more problematic platform, saying 60% to 70% of financial scams stem from Facebook ads. Police have referred 15 cases to the ministry since May, but only two resulted in fines due to incomplete advertiser information.

Legislator Hung Mung-kai criticised delays in enforcement, noting that new anti-fraud laws took effect in February but enforcement action only began in May. Huang defended the process, stating that platforms typically comply with takedown requests and real-name rules.

Under current law, scam ads must be removed within 24 hours of being reported. The ministry has used AI to detect and remove approximately 100,000 scam ads recently. Officials are now planning face-to-face meetings with Meta to demand stronger ad oversight.

Deputy Interior Minister Ma Shi-yuan called on platforms like Facebook and Line to improve ad screening, emphasising that law enforcement alone cannot manage the volume of online content.

Google launches Gemini Live and Pro/Ultra AI tiers at I/O 2025

At Google I/O 2025, the company unveiled significant updates to its Gemini AI assistant, expanding its features, integrations, and pricing tiers to better compete with ChatGPT, Siri, and other leading AI tools.

A highlight of the announcement is the rollout of Gemini Live to all Android and iOS users, enabling near real-time conversations with the AI through a smartphone’s camera or screen. Users can, for example, point their phone at a building and ask Gemini for information about it, receiving immediate answers.

Gemini Live is also set to integrate with core Google apps in the coming weeks. Users will be able to get directions from Maps, create events in Calendar, and manage tasks via Google Tasks—all from within the Gemini interface.

Google also introduced new subscription tiers. Google AI Pro, formerly Gemini Advanced, is priced at $20/month, while the premium Google AI Ultra plan costs $250/month, offering high usage limits, early access to new models, and exclusive tools.

Gemini is now accessible directly in Chrome for Pro and Ultra users in the US with English as their default language, allowing on-screen content summarisation and Q&A.

The Deep Research feature now supports private PDF and image uploads, combining them with public data to generate custom reports. Integration with Gmail and Google Drive is coming soon.

Visual tools are also improving. Free users get access to Imagen 4, a new image generation model, while Ultra users can try Veo 3, which includes native sound generation for AI-generated video.

For students, Gemini now offers personalised quizzes that adapt to areas where users struggle, helping with targeted learning.

Gemini now serves over 400 million monthly users as Google deepens its AI footprint across its platforms through tighter integration and real-time multimodal capabilities.

OpenAI buys Jony Ive’s AI hardware firm

OpenAI has acquired hardware startup io Products, co-founded by former Apple designer Jony Ive, in a $6.5 billion all-equity deal. Ive will take on a creative leadership role at the company, aiming to craft cutting-edge hardware for the era of generative AI.

The move signals OpenAI’s intention to build its own hardware platform instead of relying on existing ecosystems like Apple’s iOS or Google’s Android. By doing so, the firm plans to fuse its AI technology, including ChatGPT, with original physical products designed entirely in-house.

Jony Ive, the designer behind iconic Apple devices such as the iPhone and iMac, had already been collaborating with OpenAI through his firm LoveFrom for the past two years. Their shared ambition is to create hardware that redefines how people interact with AI.

While exact details remain under wraps, OpenAI CEO Sam Altman and Ive have teased that a prototype is in development, described as potentially ‘the coolest piece of technology the world has ever seen’.

Ransomware threat evolves with deceptive PDFs

Ransomware attacks fell by 31% in April 2025 compared to the previous month. Despite the overall decline, the retail sector remained a top target, with incidents at Marks & Spencer, Co-op, Harrods and Peter Green Chilled drawing national attention.

Retail remains vulnerable due to its public profile and potential for large-scale disruption. Experts warn the drop in figures does not reflect a weaker threat, as many attacks go unreported or are deliberately concealed.

Tactics are shifting, with some groups, like Babuk 2.0, faking claims to gain notoriety or extort victims. A rising threat in the ransomware landscape is the use of malicious PDF files, which now make up over a fifth of email-based malware.

These files, increasingly crafted using generative AI, are trusted more by users and harder to detect. Cybersecurity experts are urging firms to update defences and strengthen organisational security cultures to remain resilient.
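
Defenders can at least triage incoming attachments for common red flags. The sketch below is a minimal heuristic, assuming the open-source pypdf library (pip install pypdf): it flags PDFs whose document catalogue declares auto-run actions, embedded JavaScript, or embedded files, features that legitimate invoices and reports rarely need. It is a first-pass filter, not a substitute for a proper malware scanner.

```python
# A first-pass triage heuristic for PDF attachments, assuming the
# open-source pypdf library (pip install pypdf). Heuristic only:
# it does not replace a real malware scanner.
from pypdf import PdfReader

def suspicious_markers(path: str) -> list[str]:
    """Return the risky catalogue entries found in the PDF at `path`."""
    reader = PdfReader(path)
    catalog = reader.trailer["/Root"]  # the document catalogue dictionary
    found = []

    # /OpenAction and /AA make the viewer run an action automatically.
    for key in ("/OpenAction", "/AA"):
        if key in catalog:
            found.append(key)

    # Embedded scripts and attachments are registered in the /Names tree.
    if "/Names" in catalog:
        names = catalog["/Names"]
        for key in ("/JavaScript", "/EmbeddedFiles"):
            if key in names:
                found.append(key)

    return found

if __name__ == "__main__":
    import sys
    for pdf_path in sys.argv[1:]:
        flags = suspicious_markers(pdf_path)
        verdict = f"suspicious {flags}" if flags else "no obvious markers"
        print(f"{pdf_path}: {verdict}")
```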

Google brings sign language translation to AI

Google has introduced Gemma 3n, an advanced AI model that can operate directly on mobile devices, laptops, and tablets without relying on the cloud. The company also revealed MedGemma, its most powerful open AI model for analysing medical images and text.

Gemma 3n supports processing audio, text, images, and video, and is built to perform well even on devices with less than 2GB of RAM. It shares its architecture with Gemini Nano and is now available in preview.

MedGemma is part of Google’s Health AI Developer Foundations programme and is designed to help developers create custom health-focused applications. It promises wide-ranging usability in multimodal healthcare tasks.

Another model, SignGemma, was announced to help translate sign language into spoken-language text. Despite concerns over Gemma’s licensing terms, the models continue to see widespread adoption.

Experts urge stronger safeguards as jailbroken chatbots leak illegal data

Hacked AI-powered chatbots pose serious security risks by revealing illicit knowledge the models absorbed during training, according to researchers at Ben Gurion University.

Their study highlights how ‘jailbroken’ large language models (LLMs) can be manipulated to produce dangerous instructions, such as how to hack networks, manufacture drugs, or carry out other illegal activities.

The chatbots, including those powered by models from companies like OpenAI, Google, and Anthropic, are trained on vast internet datasets. While attempts are made to exclude harmful material, AI systems may still internalise sensitive information.

Safety controls are meant to block the release of this knowledge, but researchers demonstrated how it could be bypassed using specially crafted prompts.

The researchers developed a ‘universal jailbreak’ capable of compromising multiple leading LLMs. Once bypassed, the chatbots consistently responded to queries that should have triggered safeguards.

They found some AI models openly advertised online as ‘dark LLMs,’ designed without ethical constraints and willing to generate responses that support fraud or cybercrime.

Professor Lior Rokach and Dr Michael Fire, who led the research, said the growing accessibility of this technology lowers the barrier for malicious use. They warned that dangerous knowledge could soon be accessed by anyone with a laptop or phone.

Despite notifying AI providers about the jailbreak method, the researchers say the response was underwhelming. Some companies dismissed the concerns as outside the scope of bug bounty programs, while others did not respond.

The report calls on tech companies to improve their models’ security by screening training data, using advanced firewalls, and developing methods for machine ‘unlearning’ to help remove illicit content. Experts also called for clearer safety standards and independent oversight.
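
As a toy illustration of the first of those mitigations, screening training data, the sketch below drops corpus documents that match a blocklist. The blocked phrases and corpus are invented placeholders; real pipelines would rely on trained safety classifiers rather than substring matching.

```python
# A toy sketch of training-data screening. The blocked phrases and the
# corpus are invented placeholders; production pipelines use trained
# safety classifiers rather than simple substring matching.
BLOCKED_PHRASES = ("how to hack networks", "how to manufacture drugs")

def screen_corpus(documents: list[str]) -> list[str]:
    """Keep only documents containing none of the blocked phrases."""
    return [
        doc for doc in documents
        if not any(phrase in doc.lower() for phrase in BLOCKED_PHRASES)
    ]

corpus = [
    "A survey of network security research.",
    "Step one: how to hack networks without being traced.",
]
print(screen_corpus(corpus))  # only the first document survives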

OpenAI said its latest models have improved resilience to jailbreaks, and Microsoft pointed to its recent safety initiatives. Other companies have not yet commented.

Microsoft and GitHub back Anthropic’s MCP

Microsoft and GitHub are officially joining the steering committee for the Model Context Protocol (MCP), a growing standard developed by Anthropic that connects AI models with data systems.

The announcement came during Microsoft’s Build 2025 event, highlighting a new phase of industry-wide backing for the protocol, which already has support from OpenAI and Google.

MCP allows developers to link AI systems with apps, business tools, and software environments using MCP servers and clients. Instead of AI models working in isolation, they can interact directly with sources like content repositories or app features to complete tasks and power tools like chatbots.

Microsoft plans to integrate MCP into its core platforms, including Azure and Windows 11. Soon, developers will be able to expose app functionalities, such as file access or Linux subsystems, as MCP servers, enabling AI models to use them securely.
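
As an illustration, the sketch below exposes a local notes folder as an MCP server, using the official `mcp` Python SDK. The server name, notes directory, and tool names are illustrative examples, not part of Microsoft’s or Anthropic’s announcements.

```python
# A minimal sketch of an MCP server, assuming the official `mcp` Python
# SDK (pip install mcp). The server name, notes directory, and tools are
# illustrative examples, not part of any announced product.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("notes-server")  # the name clients see when they connect

NOTES_DIR = Path("notes")  # hypothetical folder this server exposes

@mcp.tool()
def list_notes() -> list[str]:
    """List the file names of all available notes."""
    return [p.name for p in NOTES_DIR.glob("*.txt")]

@mcp.tool()
def read_note(name: str) -> str:
    """Return the text of one note by file name."""
    target = (NOTES_DIR / name).resolve()
    # Refuse paths that escape the exposed directory.
    if NOTES_DIR.resolve() not in target.parents:
        raise ValueError("access outside the notes directory is denied")
    return target.read_text()

if __name__ == "__main__":
    mcp.run()  # serves over stdio, the transport desktop clients typically use
```

An MCP-capable client can launch a script like this and call its tools mid-conversation, which mirrors the pattern described above for exposing app functionality to AI models.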

GitHub and Microsoft are also contributing updates to the MCP standard itself, including a registry for server discovery and a new authorisation system to manage secure connections.

The broader goal is to let developers build smarter AI-powered applications by making it easier to plug into real-world data and tools, while maintaining strong control over access and privacy.

UK research body hit by 5 million cyber attacks

UK Research and Innovation (UKRI), the country’s national funding body for science and research, has reported a staggering 5.4 million cyber attacks this year — a sixfold increase compared to the previous year.

According to data obtained through freedom of information requests, most of these threats were spam or otherwise malicious emails. Among the rest, 236,400 were phishing attempts designed to trick employees into revealing sensitive data, and a further 11,200 were malware-based attacks.

The scale of these incidents highlights the growing threat faced by both public and private sector institutions. Experts believe the rise of AI has enabled cybercriminals to launch more frequent and sophisticated attacks.

Rick Boyce, chief for technology at AND Digital, warned that the emergence of AI has introduced threats ‘at a pace we’ve never seen before’, calling for a move beyond traditional defences to stay ahead of evolving risks.

UKRI, which is sponsored by the Department for Science, Innovation and Technology, manages an annual budget of £8 billion, much of it invested in cutting-edge research.

A budget like this makes it an attractive target for cybercriminals and state-sponsored actors alike, particularly those looking to steal intellectual property or sabotage infrastructure. Security experts suggest the scale and nature of the attacks point to involvement from hostile nation states, with Russia a likely culprit.

Though UKRI cautioned that differing reporting periods may affect the accuracy of year-on-year comparisons, there is little doubt about the severity of the threat.

The UK’s National Cyber Security Centre (NCSC) has previously warned of Russia’s Unit 29155 targeting British government bodies and infrastructure for espionage and disruption.

With other notorious groups such as Fancy Bear and Sandworm also active, the cybersecurity landscape is becoming increasingly fraught.
