Sam Altman admits OpenAI holds back stronger AI models

OpenAI recently unveiled GPT-5, a significant upgrade praised for its advances in accuracy, reasoning, writing, coding and multimodal capabilities. The model has also been designed to reduce hallucinations and excessive agreeableness.

Chief executive Sam Altman has admitted that OpenAI has even more powerful systems that cannot be released because of limited computing capacity.

Altman explained that the company must make difficult choices, as existing infrastructure cannot yet support the more advanced models. To address the issue, OpenAI plans to invest in new data centres, with spending potentially reaching trillions of dollars.

The shortage of computing power has already affected operations, including a cutback in image generation earlier in the year, following the viral Studio Ghibli-style trend.

Despite criticism of GPT-5 for offering shorter responses and lacking emotional depth, ChatGPT has grown significantly.

Altman said the platform is now the fifth most visited website worldwide and is on track to overtake Instagram and Facebook. However, he acknowledged that competing with Google will be far harder.

OpenAI intends to expand beyond ChatGPT with new standalone applications, potentially including an AI-driven social media service.

The company also backs Merge Labs, a brain-computer interface rival to Elon Musk’s Neuralink. It has partnered with former Apple designer Jony Ive to create a new AI device.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI’s promise comes with a heavy environmental price

Governments worldwide are racing to harness the economic potential of AI, but the technology’s environmental toll is growing and increasingly hard to ignore. In the US, President Trump is calling for a ten-year freeze on AI regulation, while the UK is planning ‘AI growth zones’ filled with data centres. Yet these same centres consume enormous amounts of electricity and water, resources already under strain globally.

From 2027, AI systems are expected to use at least four billion cubic metres of water annually, five times Denmark’s yearly consumption. That raises a pressing dilemma: how can innovation be supported without deepening a global water crisis in which one in four people already lacks access to clean drinking water?

The paradox is stark. AI is driving breakthroughs in medicine, agriculture, and climate science, with systems predicting cancer-fighting protein structures and detecting deforestation from space.

AI could also save the UK government tens of billions of pounds by automating routine public-sector tasks. But powering these advances requires vast, always-on data centres, which in the US alone consumed enough electricity last year to supply seven million homes.

Regulators are beginning to respond. The EU now requires large data centres to disclose their energy and water use.

The UK, however, has yet to weave AI infrastructure into its climate and planning strategies, a gap that could worsen grid congestion and even delay housing projects. Public concern is mounting: four in five Britons believe AI needs stricter oversight.

Experts say transparency is key. Mandatory reporting on emissions, water use, and energy consumption could give citizens and policymakers the data they need to act. Incentives for efficiency, renewable energy requirements, and creative solutions like using data centre heat to warm homes are already being tested abroad.

The technology’s potential remains enormous. Studies suggest AI could cut more than five billion tonnes of global emissions by 2035 through applications in energy, transport, and food. But unless sustainability becomes central to the UK’s AI strategy, the race to innovate risks becoming a sprint into environmental crisis.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Royal Wolverhampton NHS Trust reshapes care with AI and robotics

At The Royal Wolverhampton NHS Trust, AI is beginning to transform how doctors and patients experience healthcare, cutting down on paperwork and making surgery more precise. Hospital leaders say the shift is already delivering tangible results, from faster clinical letters to reduced patient hospital visits.

One of the most impactful innovations is CLEARNotes, a system that records and summarises doctor-patient consultations. Instead of doctors spending days drafting clinic letters, the technology reduces turnaround time from as long as a week to just a day or two. Clinicians report that this tool saves time and improves productivity by as much as 25% in some clinics while ensuring that safety and governance standards remain intact.

Surgery is another area where technology is making its mark. The trust operates two Da Vinci Xi robots, now a regular feature in complex procedures, including urology, colorectal, cardiothoracic, and gynaecology cases. Compared to traditional keyhole surgery, robotic operations give surgeons better control and dexterity through a console linked to a 3D camera, while patients benefit from shorter stays and faster recoveries.

Digital tools also shape the patient’s journey before surgery begins. With My Pre-Op, patients complete their pre-operative questionnaires online from home, reducing unnecessary hospital visits and helping to ensure they are in the best condition for their operation. Hospital staff say this improves both efficiency and patient comfort.

The innovations recently drew praise from Science, Innovation and Technology Secretary Peter Kyle, who visited the hospital to see the systems in action. He described the trust’s embrace of AI and robotics as ‘inspiring,’ noting how safe experimentation and digital adoption already translate into improved care and efficiency. For Wolverhampton’s healthcare providers, the changes represent not just a technological upgrade but a glimpse into the future of how the NHS might deliver care across the country.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Singapore sets jobs as top priority amid global uncertainty

Singapore’s Prime Minister Lawrence Wong said employment for citizens will remain the government’s top priority as the nation confronts global trade tensions and the rapid advance of AI.

Speaking at the annual National Day Rally to mark Singapore’s 60th year, Wong pointed to the risks created by the US–China rivalry, renewed tariff policies under President Donald Trump, and the pressure technology places on workers.

In his first major address since the May election, Wong emphasised the need to reinforce the trade-reliant economy, expand social safety nets and redevelop parts of the island.

He pledged to protect Singaporeans from external shocks by maintaining stability instead of pursuing risky shifts. ‘Ultimately, our economic strategy is about jobs, jobs and jobs. That’s our number one priority,’ he said.

The government has introduced new welfare measures, including the country’s first unemployment benefits and wider subsidies for food, utilities and education.

Wong also announced initiatives to help enterprises use AI more effectively, such as a job-matching platform and a government-backed traineeship programme for graduates.

Looking ahead, Wong said Singapore would draw up a new economic blueprint to secure its future in a world shaped by protectionism, climate challenges and changing energy needs.

After stronger-than-expected results in the first half of the year, the government recently raised its growth forecast for 2025 to between 1.5% and 2.5%.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

GenAI app usage up 50% as firms struggle with oversight

Enterprise employees are increasingly building their own AI tools, sparking a surge in shadow AI that raises security concerns.

Netskope reports a 50% rise in generative AI platform use, with over half of current adoption estimated to be unsanctioned by IT.

Platforms like Azure OpenAI, Amazon Bedrock, and Vertex AI lead this trend, allowing users to connect enterprise data to custom AI agents.

The growth of shadow AI has prompted calls for better oversight, real-time user training, and updated data loss prevention strategies.

On-premises deployment is also increasing, with 34% of firms using local LLM interfaces such as Ollama and LM Studio. Security risks grow as AI agents retrieve data through direct API calls outside the browser, particularly to OpenAI and Anthropic endpoints.
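The mechanics are easy to picture. The sketch below, a minimal illustration rather than anything drawn from the Netskope report, shows an agent-style script querying a locally hosted model through Ollama’s HTTP API; it assumes Ollama’s default local endpoint on port 11434, and the model name and prompt are placeholders. Because requests like this never pass through a browser, browser-centric monitoring and data loss prevention tools will not see them.

```python
# Minimal sketch: an agent-style script calling a local LLM outside the browser.
# Assumes an Ollama instance running on its default port; the model name and
# prompt are illustrative placeholders.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to a local Ollama model and return its reply."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()["response"]


if __name__ == "__main__":
    # Traffic like this bypasses the browser entirely, which is why
    # browser-focused oversight tools miss it.
    print(ask_local_model("Summarise the attached quarterly sales data."))
```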

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google launches AI tool offering cheap flights to flexible travellers

Google has rolled out Flight Deals, a new AI‑powered tool within Google Flights for flexible, budget‑conscious travellers. It allows users to type natural‑language descriptions of their ideal trip, such as favourite activities or timeframe, and receive bargain flight suggestions in return.

Powered by Gemini, the feature parses conversational inputs and taps real‑time flight data from multiple airlines and agencies.

The tool identifies low fares and even proposes destinations users might not have considered, ranking options by percentage savings or lowest price.

Currently in beta, Flight Deals is available in the US, Canada, and India without special opt‑in. It is also accessible via the Google Flights menu.

Traditional Google Flights remains available, with a new option to exclude basic economy fares in the US and Canada.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Researchers use AI to speed up quantum computing experiments

AI has been used to rapidly assemble arrays of atoms that could one day power quantum computers. A team led by physicist Jian-Wei Pan at the University of Science and Technology of China demonstrated how an AI model can calculate the best way to arrange neutral atoms, a long-standing challenge in the field.

The researchers showed that their system could rearrange up to 2,024 rubidium atoms into precise grid patterns in just 60 milliseconds. By comparison, a previous attempt last year arranged 800 atoms without AI but required a full second, making the new approach roughly forty times faster per atom moved.

To showcase the model’s speed, the team even used it to create an animated image of Schrödinger’s cat by guiding atoms into patterns with laser light.

Neutral atom arrays are one of the most promising approaches to building quantum computers, as the trapped atoms can maintain their fragile quantum states for relatively long periods.

The AI model was trained on different atom configurations and patterns of laser light, allowing it to quickly determine the most efficient hologram needed to reposition atoms into complex 2D and 3D shapes.

Experts in the field have welcomed the breakthrough. Mark Saffman, a physicist at the University of Wisconsin–Madison, noted that producing holograms for larger arrays usually requires intensive calculations.

The ability of AI to handle this process so efficiently, he said, left many colleagues ‘really impressed.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

GPT-5 impresses in reasoning but stumbles in flawless coding

OpenAI’s newly released GPT-5 draws praise and criticism in equal measure, as developers explore its potential for transforming software engineering.

Launched on 7 August 2025, the model has impressed with its ability to reason through complex problems and assist in long-term project planning. Yet, engineers testing it in practice note that while it can propose elegant solutions, its generated code often contains subtle errors, demanding close human oversight.

Benchmark results showcase GPT-5’s strength. The model scored 74.9% on the SWE-bench Verified test, outperforming predecessors in bug detection and analysis. Integrated into tools such as GitHub Copilot, it has already boosted productivity for large-scale refactoring projects, with some testers praising its conversational guidance.

Despite these gains, developers report mixed outcomes: the model excels at brainstorming and planning but is inconsistent at producing flawless, runnable code.

The rollout also includes GPT-5 Mini, a faster version for everyday use in platforms like Visual Studio Code. Early users highlight its speed but point out that effective prompting remains essential, as the model’s re-architected interaction style differs from GPT-4’s.

Critics argue it still trails rivals such as Anthropic’s Claude 4 Sonnet in error-free generation, even as it shows marked improvements in scientific and analytical coding tasks.

Experts suggest GPT-5 will redefine developer roles rather than replace them, shifting focus toward oversight and validation. By acting as a partner in ideation and review, the model may reduce repetitive coding tasks while elevating strategic engineering work.

For now, OpenAI’s most advanced system sets a high bar for intelligent assistance but remains a tool that depends on skilled humans to achieve reliable outcomes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Claude AI gains power to end harmful chats

Anthropic has unveiled a new capability in its Claude AI models that allows them to end conversations they deem harmful or unproductive.

The feature, part of the company’s broader exploration of ‘model welfare’, is designed to let AI systems disengage from toxic inputs or ethical contradictions, reflecting a push toward safer and more autonomous behaviour.

The decision follows an internal review of over 700,000 Claude interactions, where researchers identified thousands of values shaping how the system responds in real-world scenarios.

By enabling Claude to exit problematic exchanges, Anthropic hopes to improve trustworthiness while protecting its models from situations that might degrade performance over time.

Industry reaction has been mixed. Many researchers praised the step as a blueprint for responsible AI design, while others expressed concern that allowing models to end conversations on their own could limit user engagement or introduce unintended biases.

Critics also warned that the concept of model welfare risks over-anthropomorphising AI, potentially shifting focus away from human safety.

The update arrives alongside other recent Anthropic innovations, including memory features that allow users to maintain conversation history. Together, these changes highlight the company’s balanced approach: enhancing usability where beneficial, while ensuring safeguards are in place when interactions become potentially harmful.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Geoffrey Hinton warns AI could destroy humanity

AI pioneer Geoffrey Hinton has warned that AI could one day wipe out humanity if its development goes unchecked.

Speaking at the Ai4 conference in Las Vegas, the former Google executive estimated a 10 to 20 percent chance of such an outcome and criticised the approach taken by technology leaders.

He argued that efforts to keep humans ‘dominant’ over AI will fail once systems become more intelligent than their creators. According to Hinton, powerful AI will inevitably develop goals such as survival and control, making it increasingly difficult for people to restrain its influence.

In an interview with CNN, Hinton compared the potential future to a parent-child relationship, noting that AI systems may manipulate humans just as easily as an adult can bribe a child.

To prevent disaster, he suggested giving AI ‘maternal instincts’ so that the technology genuinely cares about human well-being.

Hinton, often called the ‘Godfather of AI’ for his pioneering work on neural networks, cautioned that without such safeguards, society risks creating beings that will ultimately outsmart and overpower us.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!