Machine learning helps prevent disruptions in fusion devices

Researchers at MIT have developed a predictive model that could make fusion power plants more reliable and safe. The approach uses machine learning and physics-based simulations to predict plasma instabilities and prevent damage during tokamak shutdowns.

Experimental tokamaks use strong magnets to contain plasma hotter than the sun’s core. They often face challenges in safely ramping down plasma currents that circulate at extreme speeds and temperatures.

The model was trained and tested on data from the Swiss TCV tokamak. By combining neural networks with physics simulations, the team achieved accurate predictions from only a few plasma pulses, cutting costs and compensating for scarce experimental data.
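The article does not describe the architecture in detail, but a common way to combine a physics simulation with a neural network is to let the network learn only a correction on top of the simulator’s output, so the data-driven part can be fitted from a handful of pulses. The sketch below is purely illustrative: the toy simulator, state variables and training data are assumptions, not details of the MIT/TCV model.

```python
# Illustrative sketch only: a hybrid predictor in which a small neural network
# learns the residual between a simplified physics simulation and measured
# plasma data. The toy simulator and data below are placeholders, not the
# actual MIT/TCV setup.
import torch
import torch.nn as nn


def physics_simulation(state: torch.Tensor) -> torch.Tensor:
    """Placeholder physics-based rampdown model: simple decay of current and temperature."""
    return torch.stack([0.95 * state[..., 0], 0.97 * state[..., 1]], dim=-1)


class ResidualCorrector(nn.Module):
    """Neural network that corrects the physics prediction using measured pulses."""

    def __init__(self, state_dim: int = 2, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(), nn.Linear(hidden, state_dim)
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # Final prediction = physics baseline + learned correction.
        return physics_simulation(state) + self.net(state)


# Fit the correction on a small batch of measurements (random stand-ins here).
model = ResidualCorrector()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
states = torch.rand(64, 2)       # measured (current, temperature) at time t
next_states = torch.rand(64, 2)  # measured values at time t + 1

for _ in range(200):
    optimiser.zero_grad()
    loss = nn.functional.mse_loss(model(states), next_states)
    loss.backward()
    optimiser.step()
```

Because the physics baseline already captures the dominant behaviour, the network only has to learn a small correction, which is one reason relatively few pulses can be enough.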

The system can now generate practical ‘trajectories’ for controllers to adjust magnets and temperatures, helping to safely manage plasma during shutdowns.

Researchers say the method could be particularly important as fusion devices scale up to grid-level energy production. High-energy plasmas in larger reactors pose greater risks, and uncontrolled terminations could damage the machine.

The new model allows operators to carefully balance rampdowns, avoiding disruptions and ensuring safer, more efficient operation.

Work on the predictive model is part of a wider collaboration with Commonwealth Fusion Systems and is supported by the EUROfusion Consortium and Swiss research institutions. Scientists see it as a crucial step toward making fusion a practical, reliable, and sustainable energy source.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini 2.5 Computer Use brings human-like interface control to AI agents

Google DeepMind has launched the Gemini 2.5 Computer Use model, a specialised version of Gemini 2.5 Pro designed to let AI agents interact directly with digital user interfaces.

Available in preview through the Gemini API, the model lets developers build agents capable of performing web and mobile tasks such as form-filling, navigation and interaction within apps.

Unlike models limited to structured APIs, Gemini 2.5 Computer Use can reason visually about what it sees on screen, making it possible to complete tasks requiring clicks, scrolls and text input.

While maintaining low latency, it outperforms rivals on several benchmarks, including Browserbase’s Online-Mind2Web and WebVoyager.

The model’s safety design includes per-step risk checks, built-in safeguards against misuse and developer-controlled restrictions on high-risk actions such as payments or security changes.
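Google’s announcement describes the behaviour rather than the client code, so the loop below is a generic, hypothetical sketch of how an interface-controlling agent of this kind operates: capture a screenshot, let the model propose the next UI action, gate high-risk actions behind a confirmation step, execute, and repeat. All function names and the action format are placeholders, not the Gemini API.

```python
# Hypothetical sketch of a computer-use agent loop with a per-step risk gate.
# Function names and the action format are placeholders, not the Gemini API.
from dataclasses import dataclass


@dataclass
class Action:
    kind: str = "done"   # e.g. "click", "scroll", "type", "done"
    x: int = 0
    y: int = 0
    text: str = ""


HIGH_RISK_KINDS = {"payment", "security_change"}


def capture_screenshot() -> bytes:
    return b""  # placeholder: grab the current screen contents


def propose_action(screenshot: bytes, goal: str) -> Action:
    return Action("done")  # placeholder: a call to the model would go here


def execute_action(action: Action) -> None:
    pass  # placeholder: drive the mouse/keyboard or browser


def user_confirms(action: Action) -> bool:
    return False  # placeholder: developer-controlled approval of risky steps


def run_agent(goal: str, max_steps: int = 20) -> None:
    """Observe the screen, ask the model for the next action, check risk, execute."""
    for _ in range(max_steps):
        action = propose_action(capture_screenshot(), goal)
        if action.kind == "done":
            break
        if action.kind in HIGH_RISK_KINDS and not user_confirms(action):
            break  # refuse high-risk actions without explicit approval
        execute_action(action)


run_agent("Fill in the sign-up form")
```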

Google has already integrated it into systems like Project Mariner, Firebase Testing Agent and AI Mode in Search, while early testers report faster, more reliable automation.

Gemini 2.5 Computer Use is now available in public preview via Google AI Studio and Vertex AI, enabling developers to experiment with advanced interface-aware agents that can perform complex digital workflows securely and efficiently.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Denmark moves to ban social media for under-15s amid child safety concerns

Denmark plans to ban children under 15 from using social media, Prime Minister Mette Frederiksen announced during her address to parliament on Tuesday, joining a wider European trend.

Describing platforms as having ‘stolen our children’s childhood’, she said the government must act to protect young people from the growing harms of digital dependency.

Frederiksen urged lawmakers to ‘tighten the law’ to ensure greater child safety online, adding that parents could still grant consent for children aged 13 and above to have social media accounts.

Although the proposal is not yet part of the government’s legislative agenda, it builds on a 2024 citizen initiative that called for banning platforms such as TikTok, Snapchat and Instagram.

The prime minister’s comments reflect Denmark’s broader push within the EU to require age verification systems for online platforms.

Her statement follows a broader debate across Europe over children’s digital well-being and the responsibilities of tech companies in verifying user age and safeguarding minors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New AI tools make Facebook Reels more engaging than ever

Facebook is enhancing how users find and share Reels, with a focus on personalisation and social interaction.

The platform’s new recommendation engine learns user interests faster, presenting more relevant and up-to-date content. Video viewing time in the US has risen over 20% year-on-year, reflecting the growing appeal of both short and long-form clips.

The update introduces new ‘friend bubbles’ showing which Reels or posts friends have liked, allowing users to start private chats instantly.

The feature encourages more spontaneous conversation and discovery through shared interests. Facebook’s ‘Save’ option has also been simplified, letting users collect favourite posts and Reels in one place while improving future recommendations.

AI now plays a larger role in content exploration, offering suggested searches on certain Reels to help users find related topics without leaving the player. By combining smarter algorithms with stronger social cues, Facebook aims to make video discovery more meaningful and community-driven.

Further personalisation tools are expected to follow as the platform continues refining its Reels experience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Study explores AI’s role in future-proofing buildings

AI could help design buildings that are resilient to both climate extremes and infectious disease threats, according to new research. The study, conducted in collaboration with Charles Darwin University, examines the application of AI in smart buildings, with a focus on energy efficiency and management.

Buildings account for over two-thirds of global carbon emissions and energy consumption, but reducing consumption remains challenging and costly. The study highlights how AI can enhance ventilation and thermal comfort, overcoming the limitations of static HVAC systems that impact sustainability and health.

Researchers propose adaptive thermal control systems that respond in real time to occupancy, outdoor conditions, and internal heat. Machine learning can optimise temperature and airflow to balance comfort, energy efficiency, and infection control.

A new framework enables designers and facility managers to simulate thermal scenarios and assess their impact on the risk of airborne transmission. It is modular and adaptable to different building types, offering a quantitative basis for future regulatory standards.
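The article does not say which transmission model the framework uses; a common quantitative starting point for linking ventilation to airborne infection risk is the Wells-Riley equation, sketched below with purely illustrative parameter values.

```python
# Illustrative only: the Wells-Riley model is a standard way to relate
# ventilation rate to airborne infection risk; it is not necessarily the
# framework used in the study.
import math


def wells_riley_risk(infectors: int, quanta_rate: float, breathing_rate: float,
                     exposure_hours: float, ventilation_m3_per_h: float) -> float:
    """Probability of infection P = 1 - exp(-I * q * p * t / Q)."""
    exponent = (infectors * quanta_rate * breathing_rate * exposure_hours
                / ventilation_m3_per_h)
    return 1.0 - math.exp(-exponent)


# Example: one infector, 10 quanta/h, 0.5 m3/h breathing rate, 8-hour exposure.
low_vent = wells_riley_risk(1, 10, 0.5, 8, ventilation_m3_per_h=200)
high_vent = wells_riley_risk(1, 10, 0.5, 8, ventilation_m3_per_h=800)
print(f"risk at 200 m3/h: {low_vent:.1%}, at 800 m3/h: {high_vent:.1%}")
```

In this toy example, quadrupling the ventilation rate cuts the predicted infection probability from roughly 18 percent to about 5 percent, which is the kind of trade-off against energy use that an adaptive HVAC controller would have to weigh.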

The study was conducted with lead author Mohammadreza Haghighat from the University of Tehran and CDU’s Ehsan Mohammadi Savadkoohi. Future work will integrate real-time sensor data to strengthen building resilience against future climate and health threats.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic launches Bengaluru office to drive responsible AI in India

AI firm Anthropic, the company behind the Claude AI chatbot, is opening its first office in India, choosing Bengaluru as its base.

The move follows OpenAI’s recent expansion into New Delhi, underlining India’s growing importance as a hub for AI development and adoption.

CEO Dario Amodei said India’s combination of vast technical talent and the government’s commitment to equitable AI progress makes it an ideal location.

The Bengaluru office will focus on developing AI solutions tailored to India’s needs in education, healthcare, and agriculture.

Amodei is visiting India to strengthen ties with enterprises, nonprofits, and startups and promote responsible AI use that is aligned with India’s digital growth strategy.

Anthropic plans further expansion in the Indo-Pacific region later in the year, following its Tokyo launch.

Chief Commercial Officer Paul Smith noted the rising demand among Indian companies for trustworthy, scalable AI systems. Anthropic’s Claude models are already accessible in India through its API, Amazon Bedrock, and Google Cloud Vertex AI.
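As a rough illustration of the direct API route (Bedrock and Vertex AI use their own client libraries), a minimal request with the Anthropic Python SDK looks something like the sketch below; the model ID is a placeholder, not a specific recommended version.

```python
# Minimal sketch of a direct Anthropic API call; the model ID below is a
# placeholder and should be replaced with a current Claude model name.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-model-id",     # placeholder model ID
    max_tokens=256,
    messages=[{"role": "user", "content": "Summarise this bug report in Hindi."}],
)
print(message.content[0].text)
```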

The company serves more than 300,000 businesses worldwide, with nearly 80 percent of usage outside the US.

India has become the second-largest market for Claude, with developers using it for tasks such as mobile UI design and web app debugging.

Anthropic is also enhancing Claude’s multilingual capabilities in major Indic languages, including Hindi, Bengali, and Tamil, to support education and public sector projects.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bulgaria eyes AI gigafactory partnership with IBM

Bulgaria is considering building an AI gigafactory in partnership with IBM and the European Commission, Prime Minister Rosen Zhelyazkov announced after meeting with IBM executives in Sofia. The project aims to attract large-scale high-tech investment and strengthen Europe’s AI infrastructure.

The proposed facility would feature over 100,000 advanced GPU chips and require up to 500 megawatts of power. The initial phase alone is expected to need around 70 megawatts, highlighting the scale of the planned operation.
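As a back-of-envelope check, and assuming the quoted figures describe total facility power spread evenly across the accelerators, the numbers imply roughly 5 kW per GPU including cooling and networking, and a first phase sized for on the order of 14,000 GPUs. These are rough assumptions, not project figures.

```python
# Back-of-envelope only: assumes the quoted power figures are facility totals
# shared evenly across all GPUs (cooling and networking overhead included).
total_power_kw = 500 * 1_000        # 500 MW full build-out
gpu_count = 100_000
per_gpu_kw = total_power_kw / gpu_count          # ≈ 5 kW per GPU
phase_one_gpus = 70 * 1_000 / per_gpu_kw         # ≈ 14,000 GPUs at 70 MW
print(f"{per_gpu_kw:.1f} kW per GPU, ~{phase_one_gpus:,.0f} GPUs in phase one")
```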

Funding could come through a public-private partnership, with the European Commission covering up to 17 percent of capital costs and EU member states contributing additional support.

IBM is considered a strategic technology partner, bringing expertise in cloud computing, cybersecurity, and AI systems. The first gigafactories across Europe are expected to begin operations between 2027 and 2028, aligning with the EU’s plan to mobilise €200 billion for AI development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google unveils CodeMender, an AI agent that repairs code vulnerabilities

Google researchers have unveiled CodeMender, an AI-powered agent designed to automatically detect and fix software vulnerabilities.

The tool aims to improve code security by generating and applying patches that address critical flaws, allowing developers to focus on building reliable software instead of manually locating and repairing weaknesses.

Built on the Gemini Deep Think models, CodeMender operates autonomously, identifying vulnerabilities, reasoning about the underlying code, and validating patches to ensure they are correct and do not introduce regressions.

Over the past six months, it has contributed 72 security fixes to open source projects, including those with millions of lines of code.

The system combines advanced program analysis with multi-agent collaboration to strengthen its decision-making. It employs techniques such as static and dynamic analysis, fuzzing and differential testing to trace the root causes of vulnerabilities.
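Differential testing, one of the techniques named above, compares two implementations that should behave identically and flags inputs where they diverge. The toy harness below illustrates the general idea and is not CodeMender’s actual tooling.

```python
# Generic illustration of differential testing, not CodeMender's implementation:
# run a patched function and a trusted reference on random inputs and flag
# any input where their outputs diverge.
import random


def reference_parse_length(header: bytes) -> int:
    """Trusted reference implementation."""
    return min(int.from_bytes(header[:2], "big"), 1024)


def patched_parse_length(header: bytes) -> int:
    """Candidate patched implementation under test."""
    value = (header[0] << 8) | header[1]
    return value if value <= 1024 else 1024


def differential_test(trials: int = 10_000) -> list[bytes]:
    """Return inputs on which the two implementations disagree."""
    mismatches = []
    for _ in range(trials):
        header = bytes(random.randrange(256) for _ in range(4))
        if reference_parse_length(header) != patched_parse_length(header):
            mismatches.append(header)
    return mismatches


print(f"{len(differential_test())} mismatching inputs found")
```

In a patching workflow, the reference side would typically be the pre-patch behaviour on benign inputs, so any divergence signals a regression introduced by the fix.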

Each proposed fix undergoes rigorous validation before being reviewed by human developers to guarantee quality and compliance with coding standards.

According to Google, CodeMender’s dual approach (reactively patching new flaws and proactively rewriting code to eliminate entire vulnerability classes) represents a major step forward in AI-driven cybersecurity.

The company says the tool’s success demonstrates how AI can transform the maintenance and protection of modern software systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Policy hackathon shapes OpenAI proposals ahead of EU AI strategy

OpenAI has published 20 policy proposals to speed up AI adoption across the EU. Released shortly before the European Commission’s Apply AI Strategy, the report outlines practical steps for member states, businesses, and the public sector to bridge the gap between ambition and deployment.

The proposals originate from Hacktivate AI, a Brussels hackathon with 65 participants from EU institutions, governments, industry, and academia. They focus on workforce retraining, SME support, regulatory harmonisation, and public sector collaboration, highlighting OpenAI’s growing policy role in Europe.

Key ideas include Individual AI Learning Accounts to support workers, an AI Champions Network to mobilise SMEs, and a European GovAI Hub to share resources with public institutions. OpenAI’s Martin Signoux said the goal was to bridge the divide between strategy and action.

Europe already represents a major market for OpenAI tools, with widespread use among developers and enterprises, including Sanofi, Parloa, and Pigment. Yet adoption remains uneven, with IT and finance leading, manufacturing catching up, and other sectors lagging behind, exposing a widening digital divide.

The European Commission is expected to unveil its Apply AI Strategy within days. OpenAI’s proposals act as a direct contribution to the policy debate, complementing previous initiatives such as its EU Economic Blueprint and partnerships with governments in Germany and Greece.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic’s Claude to power Deloitte’s new enterprise AI expansion

Deloitte entered a new enterprise AI partnership with Anthropic shortly after refunding the Australian government for a report that included inaccurate AI-generated information.

The A$439,000 (US$290,618) contract was intended for an independent review but contained fabricated citations to non-existent academic sources. Deloitte has since repaid the final instalment, and the government of Australia has released a corrected version of the report.

Despite the controversy, Deloitte is expanding its use of AI by integrating Anthropic’s Claude chatbot across its global workforce of nearly half a million employees.

The collaboration will focus on developing AI-driven tools for compliance, automation and data analysis, especially in highly regulated industries such as finance and healthcare.

The companies also plan to design AI agent personas tailored to Deloitte’s various departments to enhance productivity and decision-making. Financial terms of the agreement were not disclosed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!