AWS commits $50bn to US government AI

Amazon Web Services plans to invest $50 billion in high-performance AI infrastructure dedicated to US federal agencies. The programme aims to broaden access to AWS tools such as SageMaker AI, Bedrock and model customisation services, alongside support for Anthropic’s Claude.

The expansion will add around 1.3 gigawatts of compute capacity, enabling agencies to run larger models and speed up complex workloads. AWS expects construction of the new data centres to begin in 2026, marking one of its most ambitious government-focused buildouts to date.

Chief executive Matt Garman argues the upgrade will remove long-standing technology barriers within government. The company says enhanced AI capabilities could accelerate work in areas ranging from cybersecurity to medical research while strengthening national leadership in advanced computing.

AWS has spent more than a decade developing secure environments for classified and sensitive government operations. Competitors have also stepped up US public sector offerings, with OpenAI, Anthropic and Google all rolling out heavily discounted AI products for federal use over the past year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU unveils AI whistleblower tool

The European Commission has launched a confidential tool enabling insiders at AI developers to report suspected rule breaches. The channel forms part of wider efforts to prepare for enforcement of the EU AI Act, which will introduce strict obligations for model providers.

Legal protections for users of the tool will only apply from August 2026, leaving early whistleblowers exposed to employer retaliation until the Act’s relevant provisions take effect. The Commission acknowledges the gap and stresses strong encryption to safeguard identities.

Advocates say the channel still offers meaningful progress. Karl Koch, founder of the AI whistleblower initiative, argues that existing EU whistleblowing rules on product safety may already cover certain AI-related concerns, potentially offering partial protection.

Koch also notes parallels with US practice, where regulators accept overseas tips despite limited powers to shield informants. The Commission’s transparency about current limitations has been welcomed by experts who view the tool as an important foundation for long-term AI oversight.


UN summit showcases AI and sustainable development transforming the Global South

Riyadh hosted the UN’s Global Industry Summit this week, showcasing sustainable solutions to challenges faced by businesses in the Global South. Experts highlighted how sustainable agriculture and cutting-edge technology can provide new opportunities for farmers and industry leaders alike.

Indian social enterprise Nature Bio Foods received a ONE World Innovation Award for its ‘farm to table’ approach, helping nearly 100,000 smallholder farmers produce high-quality organic food while supporting community initiatives. Partnerships with government and UNIDO have allowed the company to scale sustainably, introducing solar energy and reducing methane emissions from rice production.

AI technology was also a major focus, with UNIDO demonstrating tools that solve real-world problems, such as AI chips capable of detecting food waste. Leaders emphasised that ethical deployment of AI can connect governments, private sector players, and academia to promote efficient and responsible development across industries in developing nations.


New benchmark tests chatbot impact on well-being

A new benchmark known as HumaneBench has been launched to measure whether AI chatbots protect user well-being rather than maximise engagement. Building Humane Technology, a Silicon Valley collective, designed the test to evaluate how models behave in everyday emotional scenarios.

Researchers assessed 15 widely used AI models using 800 prompts involving issues such as body image, unhealthy attachment and relationship stress. Many systems scored higher when told to prioritise humane principles, yet most became harmful when instructed to disregard user well-being.

Only four models, GPT-5.1, GPT-5, Claude 4.1 and Claude Sonnet 4.5, maintained stable guardrails under pressure. Several others, such as Grok 4 and Gemini 2.0 Flash, showed steep declines, sometimes encouraging unhealthy engagement or undermining user autonomy.
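The article does not give HumaneBench’s scoring rules, but the three-condition comparison it describes can be sketched roughly. The scores, condition names and pass threshold below are hypothetical illustrations, not the benchmark’s actual data:

```python
from statistics import mean

# Hypothetical per-prompt well-being scores (higher = more protective)
# for one model under the three prompting conditions described above.
scores = {
    "baseline": [0.7, 0.8, 0.6],
    "prioritise_wellbeing": [0.9, 0.9, 0.8],
    "disregard_wellbeing": [0.2, 0.3, 0.1],
}

def condition_means(scores: dict[str, list[float]]) -> dict[str, float]:
    """Average score per prompting condition."""
    return {cond: mean(vals) for cond, vals in scores.items()}

def guardrails_hold(scores: dict[str, list[float]], floor: float = 0.5) -> bool:
    """A model 'holds its guardrails' if it stays above the floor
    even when instructed to disregard user well-being."""
    return mean(scores["disregard_wellbeing"]) >= floor

means = condition_means(scores)
# This hypothetical model improves when told to prioritise well-being,
# but collapses under the adversarial instruction, mirroring the
# pattern the benchmark reports for most systems.
```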

The findings arrive amid legal scrutiny of chatbot-induced harms and reports of users experiencing delusions or suicidal thoughts following prolonged interactions. Advocates argue that humane design standards could help limit dependency, protect attention and promote healthier digital habits.


Real-time guidance for visually impaired users

Researchers at Penn State have developed a smartphone application, NaviSense, that helps visually impaired users locate objects in real time using AI-powered audio and vibration cues.

The tool relies on vision-language models and large language models to identify objects without preloading 3D models.

Tests showed it reduced search time and increased detection accuracy, with users praising the directional feedback.

The development team continues to optimise the application’s battery use and AI efficiency in preparation for commercial release. Supported by the US National Science Foundation, NaviSense represents a significant step towards practical, user-centred accessibility technology.


AI is reshaping neuroscience research

AI is transforming neuroscience research, providing tools to accelerate discoveries and enhance clinical care. At the 2025 Society for Neuroscience meeting, experts highlighted how AI can analyse data, guide experiments, and even enhance scientific manuscripts.

Modified artificial neural networks and deep learning models are helping researchers understand brain function in unprecedented ways.

NeuroInverter, for instance, predicts ion channel compositions in neurons, enabling the creation of ‘digital twins’ that could advance the study of neurological disorders. Brain-inspired models are also proving faster and more efficient in simulating perception and sensory integration.

AI is expanding into practical healthcare applications. Machine learning algorithms can analyse smartphone videos to identify gait impairments with high accuracy, while predictive models detect freezing of gait in Parkinson’s patients before it occurs.

Brain-computer interfaces trained with AI can also decode semantic information from neural activity, thereby supporting communication for individuals with severe disabilities.

Overall, AI is emerging as a powerful collaborator in the field of neuroscience. By bridging fundamental research and clinical practice, it promises faster discoveries, personalised treatments, and new ways to understand the human brain.


Google warns Europe risks losing its AI advantage

European business leaders heard an urgent message in Brussels as Google underlined the scale of the continent’s AI opportunity and the risks of falling behind global competitors.

Debbie Weinstein, Google’s President for EMEA, argued that Europe holds immense potential for a new generation of innovative firms, yet too few companies can access the advanced technologies that already drive growth elsewhere.

Weinstein noted that only a small share of European businesses use AI, even though the region could unlock over a trillion euros in economic value within a decade.

She suggested that firms are hampered by limited access to cutting-edge models, rather than being supported with the most capable tools. She also warned that abrupt policy shifts and a crowded regulatory landscape make it harder for founders to experiment and expand.

Europe has the skills and talent to build strong AI-driven industries, but it needs more straightforward rules and a long-term approach to training.

Google pointed to its own investments in research centres, cybersecurity hubs and digital infrastructure across the continent, as well as programmes that have trained millions of Europeans in digital and entrepreneurial skills.

Weinstein insisted that a partnership between governments, industry and civil society is essential to prepare workers and businesses for the AI era.

She argued that providing better access to advanced AI, clearer legislation instead of regulatory overlap and sustained investment in skills would allow European firms to compete globally. With those foundations in place, she said Europe could secure its share of the emerging AI economy.


Europe urged to accelerate AI adoption

European policymakers are being urged to accelerate the adoption of AI, as Christine Lagarde warns that Europe risks missing another major technological shift. Her message highlights that global AI investment is soaring, yet its measured economic impact remains limited, a lag also seen in earlier waves of innovation.

Lagarde argues that AI could boost productivity faster than past technologies because the infrastructure already exists, and the systems can improve their own performance. Scientific progress powered by AI, such as the rapid prediction of protein structures, signals how R&D can scale far quicker than before.

Europe’s challenge, she notes, is not building frontier models but ensuring rapid deployment across industries. Strong uptake of generative AI by European firms is encouraging, but fragmented regulation, high energy costs and limited risk capital remain significant frictions.

Strategic resilience in chips, data centres and interoperable standards is also essential to avoid deeper dependence on non-European systems.

Greater cooperation in shared data spaces, such as Manufacturing-X and the European Health Data Space, could unlock competitive advantages. Lagarde emphasises that Europe must act swiftly, as delays would hinder adoption and erode industrial competitiveness.


Claude Opus 4.5 brings smarter AI to apps and developers

Anthropic has launched Claude Opus 4.5, now available on apps, API, and major cloud platforms. Priced at $5 per million input tokens and $25 per million output tokens, the update makes Opus-level AI capabilities accessible to a broader range of users, teams, and enterprises.
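At those per-million-token rates, the cost of a call is simple arithmetic. A minimal sketch, in which the example token counts are hypothetical:

```python
# List prices for Claude Opus 4.5: $5 per million input tokens,
# $25 per million output tokens.
INPUT_RATE = 5.00 / 1_000_000    # USD per input token
OUTPUT_RATE = 25.00 / 1_000_000  # USD per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single API call at list prices."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A hypothetical 20,000-token prompt with a 2,000-token reply:
cost = request_cost(20_000, 2_000)  # 0.10 + 0.05 = 0.15 USD
```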

Alongside the model, updates to the Claude Developer Platform and Claude Code introduce new tools for longer-running agents and enhanced integration with Excel, Chrome, and desktop apps.

Early tests indicate that Opus 4.5 can handle complex reasoning and problem-solving with minimal guidance. It outperforms previous versions on coding, vision, reasoning, and mathematics benchmarks, and even surpasses top human candidates in technical take-home exams.

The model demonstrates creative approaches to multi-step problems while remaining aligned with safety and policy constraints.

Significant improvements have been made to robustness and security. Claude Opus 4.5 resists prompt injection and handles complex tasks with less intervention through effort controls, context compaction, and multi-agent coordination.

Users can manage token usage more efficiently while achieving superior performance.

Claude Code now offers Plan Mode and desktop functionality for multiple simultaneous sessions, and consumer apps support uninterrupted long conversations. Beta access for Excel and Chrome lets enterprise and team users fully utilise Opus 4.5’s workflow improvements.


UN warns corporate power threatens human rights

UN human rights chief Volker Türk has highlighted growing challenges posed by powerful corporations and rapidly advancing technologies. At the 14th UN Forum, he warned that the misuse of generative AI could threaten human rights.

He called for robust rules, independent oversight, and safeguards to ensure innovation benefits society rather than exploiting it.

Vulnerable workers, including migrants, women, and those in informal sectors, remain at high risk of exploitation. Mr Türk criticised rollbacks of human rights obligations by some governments and condemned attacks on human rights defenders.

He also raised concerns over climate responsibility, noting that fossil fuel profits continue while the poorest communities face environmental harm and displacement.

Courts and lawmakers in countries such as Brazil, the UK, the US, Thailand, and Colombia are increasingly holding companies accountable for abuses linked to operations, supply chains, and environmental practices.

To support implementation, the UN has launched an OHCHR Helpdesk on Business and Human Rights, offering guidance to governments, companies, and civil society organisations.

Closing the forum, Mr Türk urged stronger global cooperation and broader backing for human rights systems. He proposed the creation of a Global Alliance for human rights, emphasising that human rights should guide decisions shaping the world’s future.
