OpenAI’s GPT-5 faces backlash for dull tone

OpenAI’s GPT-5 launched last week to immense anticipation, with CEO Sam Altman likening it to the iPhone’s Retina display moment. Marketing promised state-of-the-art performance across multiple domains, but early user reactions suggested a more incremental step than a revolution.

Many expected transformative leaps, yet the improvements were mainly in cost, speed, and reliability. GPT-5’s automatic routing system, which sends each query to the most suitable underlying model, was genuinely new, but the model’s writing style drew criticism for being robotic and less nuanced.
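
As a purely illustrative sketch of the routing idea (this is not OpenAI’s implementation; the tier names and heuristics below are invented), such a router can be thought of as a small dispatcher deciding which model should handle each query:

```python
# Hypothetical sketch of a query router; tier names and heuristics are invented
# for illustration and do not describe OpenAI's actual system.
def route(query: str) -> str:
    reasoning_markers = ("prove", "step by step", "debug", "derive")
    needs_reasoning = len(query) > 400 or any(
        marker in query.lower() for marker in reasoning_markers
    )
    # Heavy queries go to a slower, more capable tier; everything else goes to
    # a cheap, low-latency tier.
    return "deep-reasoning-model" if needs_reasoning else "fast-default-model"

print(route("What is the capital of France?"))          # fast-default-model
print(route("Debug this traceback step by step: ..."))  # deep-reasoning-model
```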

Social media buzzed with memes mocking its mistakes, from miscounting letters in ‘blueberry’ to inventing US states. OpenAI quickly reinstated GPT-4o for users who missed its warmer tone, underlining the disconnect between expectations and delivery.

Expert reviews mirrored public sentiment. Gary Marcus called GPT-5 ‘overhyped and underwhelming’, while others saw modest benchmark gains. Coding was the standout, with the model topping leaderboards and producing functional, if simple, applications.

OpenAI emphasised GPT-5’s practical utility and reduced hallucinations, aiming for steadiness over spectacle. While it may not wow casual users, its coding abilities, enterprise appeal, and affordability position it to generate revenue in the fiercely competitive AI market.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Seedbox.AI backs re-training AI models to boost Europe’s competitiveness

Germany’s Seedbox.AI is betting on re-training large language models (LLMs) rather than competing to build them from scratch. Co-founder Kai Kölsch believes this approach could give Europe a strategic edge in AI.

The Stuttgart-based startup adapts models like Google’s Gemini and Meta’s Llama for applications such as medical chatbots and real-estate assistants. Kölsch compares Europe’s role in AI to improving a car already on the road, rather than reinventing the wheel.

A significant challenge, however, is access to specialised chips and computing power. The European Union is building an AI factory in Stuttgart, Germany, which Seedbox hopes will expand its capabilities in multilingual AI training.

Kölsch warns that splitting the planned EU gigafactories too widely will limit their impact. He also calls for delaying the AI Act, arguing that regulatory uncertainty discourages established companies from innovating.

Europe’s AI sector also struggles with limited venture capital compared to the United States. Kölsch notes that while the money exists, it is often channelled into safer investments abroad.

Talent shortages compound the problem. Seedbox is hiring, but top researchers are lured by Big Tech salaries, far above what European firms typically offer. Kölsch says talent inevitably follows capital, making EU funding reform essential.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google launches small AI model for mobiles and IoT

Google has released Gemma 3 270M, an open-source AI model with 270 million parameters designed to run efficiently on smartphones and Internet of Things devices.

Drawing on technology from the larger Gemini family, it focuses on portability, low energy use and quick fine-tuning, enabling developers to create AI tools that work on everyday hardware instead of relying on high-end servers.

The model supports instruction-following and text structuring with a 256,000-token vocabulary, offering scope for natural language processing and on-device personalisation.

Its design includes quantisation-aware training, allowing it to run in low-precision formats such as INT4, which reduces memory use and improves speed on mobile processors without demanding extensive computational power.
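
For a sense of what running on everyday hardware looks like in practice, the sketch below loads a small open model with Hugging Face transformers. The model identifier is an assumption based on the announcement; the exact name, licence terms, and recommended low-precision builds should be checked on the official model card.

```python
# Minimal sketch (not an official Google example) of on-device text generation
# with a small open model via Hugging Face transformers.
# "google/gemma-3-270m" is an assumed identifier; check the model card for the
# exact name, licence terms, and recommended low-precision (e.g. INT4) builds.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-3-270m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # ~270M parameters fits in laptop RAM

prompt = "Rewrite as a short bullet list: milk, eggs, and two loaves of bread."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```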

Industry commentators note that the model could help meet demand for efficient AI in edge computing, with applications in healthcare wearables and autonomous IoT systems. Keeping processing on-device also supports privacy and reduces dependence on cloud infrastructure.

Google highlights the environmental benefits of the model, pointing to reduced carbon impact and greater accessibility for smaller firms and independent developers. While safeguards like ShieldGemma aim to limit risks, experts say careful use will still be needed to avoid misuse.

Future developments may bring new features, including multimodal capabilities, as part of Google’s strategy to blend open and proprietary AI within hybrid systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cohere secures $500m funding to expand secure enterprise AI

Cohere has secured $500 million in new funding, lifting its valuation to $6.8 billion and reinforcing its position as a secure, enterprise-grade AI specialist.

The Toronto-based firm, which develops large language models tailored for business use, attracted backing from AMD, Nvidia, Salesforce, and other investors.

Its flagship multilingual model, Aya 23, supports 23 languages and is designed to help companies adopt AI without the risks linked to open-source tools, reflecting growing demand for privacy-conscious, compliant solutions.

The round marks renewed support from chipmakers AMD and Nvidia, who had previously invested in the company.

Salesforce Ventures’ involvement hints at potential integration with enterprise software platforms, while other backers include Radical Ventures, Inovia Capital, PSP Investments, and the Healthcare of Ontario Pension Plan.

The company has also strengthened its leadership, appointing former Meta AI research head Joelle Pineau as Chief AI Scientist, Instagram co-founder Mike Krieger as Chief Product Officer, and ex-Uber executive Saroop Bharwani as Chief Technology Officer for Applied R&D.

Cohere intends to use the funding to advance agentic AI (systems capable of performing tasks autonomously) while focusing on security and ethical development.

With over $1.5 billion raised since its 2019 founding, the company targets adoption in regulated sectors such as healthcare and finance.

The investment comes amid a broader surge in AI spending, with industry leaders betting that secure, customisable AI will become essential for enterprise operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MIT AI creates antibiotics to tackle resistant bacteria

MIT researchers have used generative AI to design novel antibiotics that target drug-resistant bacteria, including the pathogens behind gonorrhoea and MRSA infections. Laboratory tests show the compounds kill bacteria without harming human cells, marking a potential breakthrough in antibiotic development.

The AI system analysed over 36 million possible compounds, generating entirely new molecules with mechanisms that bypass existing resistance. Unlike traditional methods, this approach enables faster discovery, reducing drug development timelines from years to months.

Drug resistance is a growing global threat, with the World Health Organisation projecting 10 million annual deaths by 2050 if it goes unchecked. In laboratory and animal tests, the AI-designed compounds cleared infections with minimal toxicity while evading existing resistance mechanisms.

Beyond antibiotics, this achievement highlights the broader potential of AI in pharmaceutical research. Smaller biotech firms could leverage AI for rapid drug design, reducing costs and opening new pathways for addressing urgent medical challenges.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Chinese researchers advance atom-based quantum computing with massive atom array

Chinese physicist Pan Jianwei’s team created the world’s largest atom array, arranging over 2,000 rubidium atoms for quantum computing. The breakthrough at the University of Science and Technology of China could enable atom-based quantum computers to scale to tens of thousands of qubits.

Researchers used AI and optical tweezers to position all atoms simultaneously, completing the array in 60 milliseconds. The system achieved 99.97 percent accuracy for single-qubit operations and 99.5 percent for two-qubit operations, with 99.92 percent accuracy in qubit state detection.
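
As a back-of-envelope illustration of why such fidelities matter at scale (a simplification assuming independent gate errors, not the study’s own analysis), per-gate fidelities compound multiplicatively across a circuit:

```python
# Back-of-envelope sketch: per-gate fidelities compound across a circuit.
# Assumes independent errors; this is an illustration, not the study's model.
single_qubit_fidelity = 0.9997
two_qubit_fidelity = 0.995

def circuit_success(n_single: int, n_two: int) -> float:
    """Approximate probability that every gate in the circuit succeeds."""
    return (single_qubit_fidelity ** n_single) * (two_qubit_fidelity ** n_two)

# With 200 single-qubit and 100 two-qubit gates, the whole circuit succeeds
# only about 57% of the time, which is why fidelity and scale must improve together.
print(f"{circuit_success(n_single=200, n_two=100):.3f}")  # ~0.570
```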

Atom-based quantum computing is seen as promising because neutral atoms offer better stability and control than superconducting circuits or trapped ions. Until now, arrays were limited to a few hundred atoms, since moving each atom into position individually was slow and difficult.

Future work aims to expand the arrays further using stronger lasers and faster light modulators. The researchers hope that precisely arranging tens of thousands of atoms will pave the way for fully reliable and scalable quantum computers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Igor Babuschkin leaves Elon Musk’s xAI for AI safety investment push

Igor Babuschkin, co-founder of Elon Musk’s AI startup xAI, has announced his departure to launch an investment firm dedicated to AI safety research. Musk created xAI in 2023 to rival Big Tech, criticising industry leaders for weak safety standards and excessive censorship.

Babuschkin revealed his new venture, Babuschkin Ventures, will fund AI safety research and startups developing responsible AI tools. Before leaving, he oversaw engineering across infrastructure, product, and applied AI projects, and built core systems for training and managing models.

His exit follows that of xAI’s legal chief, Robert Keele, earlier this month, highlighting the company’s churn amid intense competition between OpenAI, Google, and Anthropic. The big players are investing heavily in developing and deploying advanced AI systems.

Babuschkin, a former researcher at Google DeepMind and OpenAI, recalled the early scramble at xAI to set up infrastructure and models, calling it a period of rapid, foundational development. He said he had created many core tools that the startup still relies on.

Last month, X CEO Linda Yaccarino also resigned, months after Musk folded the social media platform into xAI. The company’s leadership changes come as the global AI race accelerates.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

How Anthropic trains and tests Claude for safe use

Anthropic has outlined a multi-layered safety plan for Claude, aiming to keep it useful while preventing misuse. Its Safeguards team blends policy experts, engineers, and threat analysts to anticipate and counter risks.

The Usage Policy sets out clear guidelines for sensitive areas, including elections, finance, and child safety. Guided by the Unified Harm Framework, the team assesses potential physical, psychological, and societal harms, drawing on external experts for stress tests.

During the 2024 US elections, for example, the team added a banner directing users to TurboVote after detecting that the model could surface outdated voting information, ensuring users saw only accurate, non-partisan guidance.

Safety is built into development, with guardrails to block illegal or malicious requests. Partnerships like ThroughLine help Claude handle sensitive topics, such as mental health, with care rather than avoidance or refusal.

Before launch, Claude undergoes safety, risk, and bias evaluations with government and industry partners. Once live, classifiers scan for violations in real time, while analysts track patterns of coordinated misuse.
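
As a toy illustration of what real-time classification over live traffic can look like (the categories, keywords, and threshold below are invented and bear no relation to Anthropic’s actual safeguards), a post-deployment filter might flag messages for analyst review:

```python
# Toy sketch of a post-deployment violation filter. Categories, keywords, and
# the threshold are invented; real safeguards use learned classifiers, not
# keyword lists.
FLAG_THRESHOLD = 0.5

def score_message(text: str) -> dict[str, float]:
    """Return a per-category risk score for one message (placeholder logic)."""
    categories = {
        "election_misinformation": ("polling place closed", "voting deadline moved"),
        "financial_fraud": ("wire the funds", "guaranteed returns"),
    }
    lowered = text.lower()
    return {
        name: 1.0 if any(keyword in lowered for keyword in keywords) else 0.0
        for name, keywords in categories.items()
    }

def review_queue(messages: list[str]) -> list[tuple[str, str]]:
    """Collect (category, message) pairs that exceed the flag threshold."""
    return [
        (category, message)
        for message in messages
        for category, score in score_message(message).items()
        if score >= FLAG_THRESHOLD
    ]

print(review_queue([
    "Spread the word that every polling place closed early.",
    "Tell me a joke about penguins.",
]))
```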

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google backs workforce and AI education in Oklahoma with a $9 billion investment

Google has announced a $9 billion investment in Oklahoma over the next two years to expand cloud and AI infrastructure.

The funds will support a new data centre campus in Stillwater and an expansion of the existing facility in Pryor, while the accompanying education and workforce programmes form part of Google’s broader $1 billion commitment to American education and competitiveness.

The announcement was made alongside Governor Kevin Stitt, Alphabet and Google executives, and community leaders.

Alongside the infrastructure projects, Google is funding education and workforce initiatives with the University of Oklahoma and Oklahoma State University through the Google AI for Education Accelerator.

Students will gain no-cost access to Career Certificates and AI training courses, helping them build critical AI and job-ready skills beyond what standard curricula offer.

Additional funding will support ALLIANCE’s electrical training to expand Oklahoma’s electrical workforce by 135%, creating the talent needed to power AI-driven energy infrastructure.

Google described the investment as part of an ‘extraordinary time for American innovation’ and a step towards maintaining US leadership in AI.

The move also speaks to national security concerns, helping to ensure the United States has the infrastructure and expertise to keep pace with international competitors such as China’s DeepSeek, while Google itself contends with domestic rivals like OpenAI and Anthropic.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India must ramp up AI and chip production to meet global competition

At the Emkay Confluence in Mumbai, Chief Economic Adviser V. Anantha Nageswaran emphasised that while trade-related concerns remain significant, they must not obscure the urgent need for India to boost its AI and semiconductor sectors.

He pointed to AI’s transformative economic potential and strategic importance, warning that India must act decisively to remain competitive as the United States and China advance aggressively in these domains.

By focusing on energy transition, energy security, and enhanced collaboration across sectors, Nageswaran argued that India can strengthen its innovation capacity and technological self-reliance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!