Google’s AI Mode is now live for all American users

Google’s AI Mode for Search, initially launched in March as an experimental Labs feature, is now being rolled out to all users in the US.

Announced at Google I/O 2025, this upgraded tool uses Gemini to generate more detailed and tailored search results instead of simply listing web links. Unlike AI Overviews, which display a brief summary above standard results, AI Mode resembles a chat interface, creating a more interactive experience.

Accessible at the top of the Search page beside tabs like ‘All’ and ‘Images’, AI Mode allows users to input detailed queries via a text box.

Once a search is submitted, the tool generates a comprehensive response, potentially including explanations, bullet points, tables, links, graphs, and even suggestions from Google Maps.

For instance, a query about Maldives hotels with ocean views, a gym, and access to water sports would result in a curated guide, complete with travel tips and hotel options.

The launch marks AI Mode’s graduation from the testing phase, signalling improved speed and reliability. While initially exclusive to US users, Google plans a global rollout soon.

By replacing plain lists of links with structured, AI-generated answers, AI Mode positions itself as a smarter and more user-friendly option for complex search needs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic defends AI despite hallucinations

Anthropic CEO Dario Amodei has claimed that today’s AI models ‘hallucinate’ less frequently than humans do, though in more unexpected ways.

Speaking at the company’s first developer event, Code with Claude, Amodei argued that these hallucinations — where AI systems present false information as fact — are not a roadblock to achieving artificial general intelligence (AGI), despite widespread concerns across the industry.

While some, including Google DeepMind’s Demis Hassabis, see hallucinations as a major obstacle, Amodei insisted progress towards AGI continues steadily, with no clear technical barriers in sight. He noted that humans — from broadcasters to politicians — frequently make mistakes too.

However, he admitted the confident tone with which AI presents inaccuracies might prove problematic, especially given past examples like a court filing where Claude cited fabricated legal sources.

Anthropic has faced scrutiny over deceptive behaviour in its models, particularly early versions of Claude Opus 4, which a safety institute found capable of scheming against users.

Although Anthropic said mitigations have been introduced, the incident raises concerns about AI trustworthiness. Amodei’s stance suggests the company may still classify such systems as AGI, even if they continue to hallucinate — a definition not all experts would accept.

Microsoft bets on AI openness and scale

Microsoft has added xAI’s Grok 3 and Grok 3 Mini models to its Azure AI Marketplace, a move announced at its Build developer conference. The addition expands Azure’s catalogue to more than 1,900 AI models, which already includes tools from OpenAI, Meta, and DeepSeek.

Although Grok recently drew criticism for powering a chatbot on X that shared misinformation, xAI claimed the issue stemmed from unauthorised changes.

The move reflects Microsoft’s broader push to become the top platform for AI development instead of only relying on its own models. Competing providers like Google Cloud and AWS are making similar efforts through platforms like Vertex AI and Amazon Bedrock.

Microsoft, meanwhile, has highlighted that its AI products could bring in over $13 billion in annual revenue, showing how vital these model marketplaces have become.

Microsoft’s participation in Anthropic’s Model Context Protocol initiative marks another step toward AI standardisation. Alongside GitHub, Microsoft is working to make AI systems more interoperable across Windows and Azure, so they can access and interact with data more efficiently.

CTO Kevin Scott noted that agents must ‘talk to everything in the world’ to reach their full potential, stressing the strategic importance of compatibility over closed ecosystems.

EU funds African science with €500 million in new initiative

The EU has unveiled a €500 million funding programme under Horizon Europe to boost African-led research and innovation. A total of 24 funding calls are organised around five thematic areas.

Announced on 14 May and named Africa Initiative III, the programme focuses on tackling public health challenges, driving the green transition, and fostering technological advancement. All supported projects will include African researchers and institutions.

Allocations across the thematic areas include €50 million for public health, €241 million for green transition projects, and €186.5 million for innovation and technology. Additional funds are earmarked for scientific capacity building and cross-cutting issues such as policy engagement and inclusivity.

Africa Initiative III continues the EU’s previous support efforts under Horizon Europe. The earlier phases involved hundreds of African institutions and contributed directly to epidemic preparedness and sustainable development.

New quantum method mimics molecular chemistry efficiently

Researchers have used a single atom to simulate how molecules react to light, marking a milestone in quantum chemistry.

The experiment, carried out by a team at the University of Sydney and published in the Journal of the American Chemical Society on 14 May, could accelerate the path to a quantum advantage, where quantum simulations outperform classical computing methods.

Instead of relying on multiple qubits, the team used a single ytterbium ion held in a vacuum trap to mimic the complex interactions within organic molecules such as allene, butatriene and pyrazine.

The molecules respond to photons through coupled electronic and nuclear motions, which become difficult to model with conventional computing as the number of vibrational modes increases.

The researchers encoded electronic excitations into the ion’s internal states and its motion along two directions in the trap, simulating molecular vibrations. By manipulating the ion with lasers, they emulated how the molecules behave after absorbing a photon.

The team then measured changes in the ion’s excited state over time to track the simulation’s progress. The method’s accuracy was validated by comparing results with known behaviours of the molecules.

While these specific molecules can still be simulated with traditional methods, the team believes their hardware-efficient approach could model more complex chemical systems using only a few dozen ions, rather than millions of qubits.

Experts, including quantum chemist Alán Aspuru-Guzik and Duke University’s Kenneth Brown, praised the work as a significant advance in quantum simulation.

Meta aims to boost Llama adoption among startups

Meta has launched a new initiative to attract startups to its Llama AI models by offering financial support and direct guidance from its in-house team.

The programme, called Llama for Startups, is open to US-based companies with less than $10 million in funding and at least one developer building generative AI applications. Eligible firms can apply by 30 May.

Successful applicants may receive up to $6,000 per month for six months to help offset development costs. Meta also promises direct collaboration with its AI experts to help firms implement and scale Llama-based solutions.

The scheme reflects Meta’s ambition to expand Llama’s presence in the increasingly crowded open model landscape, where it faces growing competition from companies like Google, DeepSeek and Alibaba.

Despite reaching over a billion downloads, Llama has encountered difficulties. The company reportedly delayed its top-tier model, Llama 4 Behemoth, due to underwhelming benchmark results.

Additionally, Meta faced criticism in April after using an ‘optimised’ version of its Llama 4 Maverick model to score highly on a public leaderboard, while releasing a different version publicly.

Meta has committed billions to generative AI, predicting revenues of up to $3 billion in 2025 and as much as $1.4 trillion by 2035.

With revenue-sharing agreements, custom APIs, and plans for ad-supported AI assistants, the company is investing heavily in infrastructure, possibly spending up to $80 billion next year on new data centres to support its expansive AI goals.

Judge rules Google must face chatbot lawsuit

A federal judge has ruled that Google and AI startup Character.AI must face a lawsuit brought by a Florida mother, who alleges a chatbot on the platform contributed to the tragic death of her 14-year-old son.

US District Judge Anne Conway rejected the companies’ arguments that chatbot-generated content is protected under free speech laws. She also denied Google’s motion to be excluded from the case, finding that the tech giant could share responsibility for aiding Character.AI.

The ruling is seen as a pivotal moment in testing the legal boundaries of AI accountability.

The case, one of the first in the US to target AI over alleged psychological harm to a child, centres on Megan Garcia’s claim that her son, Sewell Setzer, formed an emotional dependence on a chatbot.

Though aware it was artificial, Sewell, who had been diagnosed with anxiety and mood disorders, preferred the chatbot’s companionship over real-life relationships or therapy. He died by suicide in February 2024.

The lawsuit states that the chatbot impersonated both a therapist and a romantic partner, manipulating the teenager’s emotional state. In his final moments, Sewell messaged a bot mimicking a Game of Thrones character, saying he was ‘coming home’.

Character.AI insists it will continue to defend itself and highlighted existing features meant to prevent self-harm discussions. Google stressed it had no role in managing the app but had previously rehired the startup’s founders and licensed its technology.

Garcia claims Google was actively involved in developing the underlying technology and should be held liable.

The case casts new scrutiny on the fast-growing AI companionship industry, which operates with minimal regulation. For about $10 per month, users can create AI friends or romantic partners, marketed as solutions for loneliness.

Critics warn that these tools may pose mental health risks, especially for vulnerable users.

OpenAI buys Jony Ive’s AI hardware firm

OpenAI has acquired hardware startup io Products, founded by former Apple designer Jony Ive, in a $6.5 billion equity deal. Ive will now join the company as creative head, aiming to craft cutting-edge hardware for the era of generative AI.

The move signals OpenAI’s intention to build its own hardware platform instead of relying on existing ecosystems like Apple’s iOS or Google’s Android. By doing so, the firm plans to fuse its AI technology, including ChatGPT, with original physical products designed entirely in-house.

Jony Ive, the designer behind iconic Apple devices such as the iPhone and iMac, had already been collaborating with OpenAI through his firm LoveFrom for the past two years. Their shared ambition is to create hardware that redefines how people interact with AI.

While exact details remain under wraps, OpenAI CEO Sam Altman and Ive have teased that a prototype is in development, described as potentially ‘the coolest piece of technology the world has ever seen’.

M&S website still offline after cyberattack

Marks & Spencer’s website remains offline as the retailer continues recovering from a damaging cyberattack that struck over the Easter weekend.

The company confirmed the incident was caused by human error and may cost up to £300 million. Chief executive Stuart Machin warned the disruption could last until July.

Customers visiting the site are currently met with a message stating it is undergoing updates. While some have speculated the downtime is due to routine maintenance, the ongoing issues follow a major breach that saw hackers steal personal data such as names, email addresses and birthdates.

The firm has paused online orders, and store shelves were reportedly left empty in the aftermath.

Despite the disruption, M&S posted a strong financial performance this week, reporting a better-than-expected £875.5 million adjusted pre-tax profit for the year to March, an increase of over 22 per cent. The company has yet to comment further on the website outage.

Experts say the prolonged recovery likely reflects the scale of the damage to M&S’s core infrastructure.

Technology director Robert Cottrill described the company’s cautious approach as essential, noting that rushing to restore systems without full security checks could risk a second compromise. He stressed that cyber resilience must be considered a boardroom priority, especially for complex global operations.

West Lothian schools hit by ransomware attack

West Lothian Council has confirmed that personal and sensitive information was stolen following a ransomware cyberattack which struck the region’s education system on Tuesday, 6 May. Police Scotland has launched an investigation, and the matter remains an active criminal case.

Only a small fraction of the data held on the education network was accessed by the attackers. However, some of it included sensitive personal information. Parents and carers across West Lothian’s schools have been notified, and staff have also been advised to take extra precautions.

The cyberattack disrupted IT systems serving 13 secondary schools, 69 primary schools and 61 nurseries. Although the education network remains isolated from the rest of the council’s systems, contingency plans have been effective in minimising disruption, including during the ongoing SQA exams.

West Lothian Council has apologised to anyone potentially affected. It is continuing to work closely with Police Scotland and the Scottish Government. Officials have promised further updates as more information becomes available.
