AI reshapes UK social care but raises concerns

AI tools such as pain-detecting apps, night-time sensors, and even training robots are increasingly shaping social care in the UK.

Care homes now use the Painchek app to scan residents’ faces for pain indicators, while sensors like AllyCares monitor unusual activity, reducing preventable hospital visits.

Meanwhile, Oxford researchers have created a robot that helps train carers by mimicking patients’ reactions to pain. Families often adjust to the technology after seeing improvements in their loved ones’ care, but transparency and human oversight remain essential.

Despite the promise of these innovations, experts urge caution. Dr Caroline Green from the University of Oxford warns that AI must remain a support, not a replacement, and raises concerns about bias, data privacy, and potential overdependence on technology.

With the UK's ageing population and staffing shortages straining social care, technology offers valuable assistance.

Specialists stress that investment in skilled human carers remains crucial. The government has endorsed the role of AI in care but has yet to establish clear national policies guiding its ethical use.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Motorola reveals new Razr phones with AI power

Motorola has unveiled its latest Razr flip phones, packed with AI features from a mix of tech giants including Google, Microsoft, Meta and Perplexity. The Ultra, Plus and standard Razr models will debut on 15 May, with tools that suggest actions, summarise notifications and even respond to the user’s gaze.

Perplexity’s AI app will come preinstalled, marking a rare shift towards diversifying AI search tools on Android devices. Unlike rivals Apple and Samsung, Motorola’s strategy integrates multiple AI systems, avoiding reliance on a single provider.

Notably absent is OpenAI’s technology, with Motorola instead selecting partners based on their expertise in research, productivity and user engagement. Meta’s Llama model, Microsoft’s Copilot and Google’s Gemini assistant will all feature in the new phones.

The launch comes as Google faces legal scrutiny over its search engine dominance, raising questions about future control of the AI market. Despite trade tensions and potential tariff impacts, Motorola has kept prices steady, crediting its parent company Lenovo’s adaptable supply chain.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google expands AdSense into AI amid rising regulatory pressure

Google has begun embedding advertisements within AI chatbot conversations as part of its AdSense network, strengthening its hold on the digital advertising market.

A company spokesperson confirmed that ‘AdSense for Search is available for websites that want to show relevant ads in their conversational AI experiences.’ The move comes as AI startups increasingly adopt advertising models to manage the steep costs of operating generative AI systems.

The introduction of ads into chatbot interactions continues Google’s two-decade-long strategy of extending its ad dominance to new technologies and user interfaces.

From revolutionising online ads with AdWords in 2000 to expanding into mobile and video, Google has consistently adapted its approach to maintain market leadership.

Integrating ads into AI chatbots marks the latest step, as the company responds to shifts in how users engage with digital content. This is especially vital as its core search ad business faces growing competition from AI-first platforms like Perplexity.

Google’s timing is also shaped by mounting regulatory pressure. In April 2025, a federal judge ruled the company had violated antitrust laws in key advertising markets, threatening its control of the digital ad ecosystem.

By establishing its ad presence in emerging AI markets, Google is seeking to secure new revenue streams and embed its standards before regulations catch up. This strategic pivot helps Google maintain relevance even as its traditional business faces legal challenges.

For AI startups, the introduction of advertising is driven by economic necessity. Generative AI systems incur high operational costs, making monetisation through ads increasingly attractive.

Partnering with Google offers immediate access to a global advertiser base and proven monetisation tools. Companies like iAsk and Liner have embraced the model, with Liner’s CEO describing their ads as an early version of Google’s own search ads.

With the AI market projected to exceed $800 billion by 2030, establishing sustainable revenue models has become a priority.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini now allows up to 10 images per prompt on all platforms

Users of the Gemini app will now find it much easier to upload multiple images thanks to a new quality-of-life update.

Until now, only a single image could be added per prompt, with any new upload forcing the previous one to be removed. That restriction has been lifted, with support for up to 10 images now available across Android, iOS, and the web.

On mobile devices, users can select multiple photos directly through the system gallery or Gemini’s built-in Camera.

After capturing an image, the viewfinder remains accessible, allowing for additional photos to be taken and uploaded without leaving the prompt. Those who do not yet see the feature may need to force stop and restart the app for it to become available.

Web users visiting gemini.google.com will also benefit from this improvement, though uploads are limited to 10 images per session. Attempts to exceed this limit will result in a clear notification explaining that only 10 attachments can be uploaded at once.

The change applies to all current Gemini models, including 2.0 Flash, 2.5 Flash, and 2.5 Pro. In announcing the update, Gemini lead Josh Woodward encouraged users to share feedback, especially about common frustrations and other user experience issues.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China boosts tourism with AI innovations

China’s tourism industry is undergoing rapid transformation as AI technologies become increasingly integrated into both national platforms and regional services. Instead of relying solely on traditional travel planning, tourists can now receive personalised itinerary suggestions in seconds.

Major platforms such as Trip.com use large AI models to assist users before, during and after their journeys—cutting decision-making time from 9 to 6.6 hours, according to Chairman Liang Jianzhang.

Several provinces and cities, including Guizhou and Shanghai, have launched their own AI tourism agents with distinct local features. Guizhou’s Huang Xiao Xi, a digital assistant in ethnic attire, offers tailored travel plans and food ordering options instantly.

Meanwhile, Shanghai’s Hu Xiao You connects tourists with real-time data about venues, traffic, and public amenities, learning from user feedback to improve recommendations over time.

Instead of overwhelming tourists with raw data, these AI agents streamline access to relevant information for a more efficient travel experience.

The rise of wearable AI guides and immersive tech, such as VR, AR, and 3D projections, has also transformed visits to museums and exhibitions. Visitors can now interact with holographic historical figures or animated ancient artworks, blending culture with innovation.

Rather than replacing traditional tourism, China is revitalising it through technology, aiming for improved digitisation, automation and smarter services that meet local development goals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI expands developer tools with Windsurf purchase

OpenAI, the creator of ChatGPT, is reportedly set to acquire Windsurf, an AI-powered coding assistant formerly known as Codeium, for $3 billion, according to Bloomberg. If confirmed, it would be OpenAI’s largest acquisition to date.

The deal has not yet closed, but it follows recent investment talks Windsurf held with major backers such as General Catalyst and Kleiner Perkins that valued the startup at the same amount.

Windsurf was last valued at $1.25 billion in 2024 after a $150 million funding round. Instead of raising more capital independently, the company now appears poised to join OpenAI, which is looking to bolster its suite of developer tools within ChatGPT.

The acquisition reflects OpenAI’s efforts to remain competitive in the fast-evolving AI coding landscape, following earlier purchases like Rockset and Multi last year.

OpenAI also revealed it would scale back a planned restructuring, abandoning its proposal to become a for-profit entity.

The decision comes amid growing scrutiny and legal challenges, including a high-profile lawsuit from Elon Musk, who accused the firm of drifting from its founding mission to develop AI that serves humanity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia opens new quantum research centre in Boston

Nvidia has unveiled plans to open the Nvidia Accelerated Quantum Research Center (NVAQC) in Boston, a facility set to bridge quantum computing and AI supercomputing.

Expected to begin operations later this year, the centre aims to accelerate the shift from experimental to practical quantum computing.

Rather than treating quantum hardware as a standalone endeavour, Nvidia intends to integrate it with existing AI-driven systems, believing this combination could unlock solutions to problems unsolvable by today’s machines.

Quantum computing—much like AI in its early stages—fits naturally with Nvidia’s core strength: parallel processing. Instead of continuing to rely on traditional serial computing, the company has long embraced parallelism through its GPU technology and CUDA software platform.

Nvidia’s success in transforming GPUs from graphics engines into tools for scientific and commercial applications began with its bold decision to make CUDA available across all its products, even at the cost of short-term profit margins.

Nvidia now sees quantum error correction as the next major challenge. Current quantum computers, operating with between 50 and 100 qubits, face a high error rate due to environmental ‘noise.’

Achieving truly useful systems will require a million qubits or more, most of which will be used for error correction. Instead of depending solely on traditional methods, Nvidia plans to use AI to develop scalable solutions capable of correcting errors in real time.

The Boston-based NVAQC will serve as a testing ground for these innovations. Harvard, MIT, and quantum startups like Quantinuum and QuEra will collaborate with Nvidia’s quantum team to train AI models for error correction and test them using Nvidia’s top-tier supercomputers.

By doing so, Nvidia hopes to make quantum computing not just viable, but powerful and practical at scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New AI app offers early support for parents of neurodivergent children

A new app called Hazel, developed by Bristol-based company Spicy Minds, offers parents a powerful tool to better understand and support their neurodivergent children while waiting for formal diagnoses. Using AI, the app runs a series of tests and then provides personalised strategies tailored to everyday challenges like school routines or holidays.

While it doesn’t replace a medical diagnosis, Hazel aims to fill a critical gap for families stuck on long waiting lists. Spicy Minds CEO Ben Cosh emphasised the need for quicker support, noting that many families wait years before receiving an autism diagnosis through the UK’s NHS.

‘Parents shouldn’t have to wait years to understand their child’s needs and get practical support,’ he said.

In Bristol alone, around 7,000 children are currently on waiting lists for an autism assessment, a number that continues to rise. Parents like Nicola Bennett, who waited five years for her son’s diagnosis, believe the app could be life-changing.

She praised Hazel for offering real-time guidance for managing sensory needs and daily planning—tools she wished she’d had much earlier. She also suggested integrating links to local support groups and services to make the app even more impactful.

By helping reduce stress and giving families a head start on understanding neurodiversity, Hazel represents a meaningful step toward more accessible, tech-driven support for parents navigating a complex and often delayed healthcare system.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google’s Gemini AI completes Pokémon Blue with a little help

Google’s cutting-edge AI model, Gemini 2.5 Pro, has made headlines by completing the 1996 classic video game Pokémon Blue. The feat was not achieved by Google itself; it was orchestrated by Joel Z, an independent software engineer who created a livestream called Gemini Plays Pokémon.

Despite being unaffiliated with the tech giant, Joel’s project has drawn enthusiastic support from Google executives, including CEO Sundar Pichai, who celebrated the victory on social media. The challenge of beating a game like Pokémon Blue has become an informal benchmark for testing the reasoning and adaptability of large language models.

Earlier this year, AI company Anthropic revealed its Claude model was making strides in a similar title, Pokémon Red, but has yet to complete it. While comparisons between the two AIs are inevitable, Joel Z clarified that such evaluations are flawed due to differences in tools, data access, and gameplay frameworks.

To play the game, Gemini relied on a complex system called an ‘agent harness,’ which feeds the model visual and contextual information from the game and translates its decisions into gameplay actions. Joel admits to making occasional interventions to improve Gemini’s reasoning but insists these did not include cheats or explicit hints. Instead, his guidance was limited to refining the model’s problem-solving capabilities.
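To make the idea concrete, here is a minimal Python sketch of what such a feed-observe-decide-act loop could look like. Everything in it (the DummyEmulator, the query_model helper, the button names) is a hypothetical stand-in for illustration only, not Joel Z’s actual harness or Google’s Gemini API.

```python
import random

# Minimal sketch of an 'agent harness' loop: feed the model an observation plus
# context, parse its reply into a button press, repeat. All names here are
# hypothetical stand-ins used only to illustrate the cycle described above.

VALID_ACTIONS = ("UP", "DOWN", "LEFT", "RIGHT", "A", "B", "START")

class DummyEmulator:
    """Toy stand-in for a game emulator: screenshots in, button presses out."""
    def __init__(self, steps_to_finish: int = 5) -> None:
        self.steps_left = steps_to_finish

    def screenshot(self) -> bytes:
        return b"<raw frame bytes>"      # a real harness would grab the framebuffer

    def press(self, button: str) -> None:
        self.steps_left -= 1             # a real harness would inject the input

    def finished(self) -> bool:
        return self.steps_left <= 0

def query_model(frame: bytes, context: str) -> str:
    """Stand-in for a multimodal LLM call; a real harness would send the frame and prompt."""
    return random.choice(VALID_ACTIONS)

def run_harness(emu: DummyEmulator, max_steps: int = 10_000) -> None:
    history: list[str] = []              # contextual information carried between turns
    for step in range(max_steps):
        frame = emu.screenshot()         # visual information from the game
        context = (
            "Recent actions:\n" + "\n".join(history[-20:]) +
            "\nReply with exactly one button: " + ", ".join(VALID_ACTIONS)
        )
        reply = query_model(frame, context).strip().upper()
        action = reply if reply in VALID_ACTIONS else "A"   # fall back on a safe default
        emu.press(action)                # translate the decision into a gameplay action
        history.append(f"step {step}: {action}")
        if emu.finished():
            break

if __name__ == "__main__":
    run_harness(DummyEmulator())
```

In a real setup the history summary, screenshot encoding, and prompt design carry most of the weight, which is where interventions like Joel Z’s refinements to the model’s reasoning would come in.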

The project remains a work in progress, and Joel continues to enhance the framework behind Gemini’s gameplay. While it may not be an official benchmark for AI performance, the achievement is a playful demonstration of how far AI systems have come in tackling creative and unexpected challenges.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple partners with Anthropic on AI coding tool

Apple is reportedly collaborating with Anthropic, a startup backed by Amazon, to develop a new AI-powered ‘vibe coding’ platform, according to Bloomberg.

The platform will use Anthropic’s Claude Sonnet model to write, edit, and test code on behalf of programmers, updating Apple’s existing Xcode software instead of launching an entirely separate tool.

‘Vibe coding’ refers to a growing trend in AI development where intelligent agents generate code autonomously instead of relying on manual programming. Apple is said to be testing the system internally for now, with no confirmed decision on whether it will become publicly available.

The move comes as tech firms race to lead in generative AI. While Apple previously introduced a similar tool, Swift Assist, it was never released to developers amid concerns from engineers about possible slowdowns in app creation.

Apple and Anthropic have not commented publicly on the reported collaboration.

With rivals like OpenAI pushing ahead—reportedly negotiating a $3 billion acquisition of coding assistant Windsurf—Apple is equipping its devices with more advanced chips and AI features, including ChatGPT integration, to keep pace in a rapidly evolving landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!