Microsoft and NVIDIA unveil AI tools for nuclear energy permitting and operations

Microsoft has announced an AI collaboration with NVIDIA to support nuclear energy projects across permitting, design, construction, and operations. In a post published on 24 March, the tech conglomerate said the initiative aims to provide end-to-end tools for the nuclear sector, focusing on streamlining permitting, accelerating design, and optimising operations.

Microsoft frames the effort within a broader energy challenge, arguing that rising power demand and long project timelines are putting pressure to accelerate the delivery of firm, carbon-free power. The company says customised engineering, fragmented data, and manual regulatory review slow nuclear projects. It presents AI as a way to make project development more repeatable, traceable, secure, and predictable.

The post says the collaboration spans the full lifecycle of a nuclear plant. Microsoft describes a model in which digital twins, high-fidelity simulations, and AI-assisted workflows support design and engineering, licensing and permitting, construction and delivery, and operations and maintenance.

According to the company, engineers would be able to reuse design patterns, model the impact of changes before construction begins, and link project decisions to supporting evidence and applicable rules. Microsoft also says generative AI can assist with drafting and gap analysis in permit documentation, while predictive modelling and operational digital twins can support anomaly detection and maintenance planning.

Microsoft says traceability and auditability are central to the approach. The company lists four intended qualities of the system: traceable records linking engineering decisions to evidence and regulations, audit-ready documentation, secure use within a governed environment, and predictable outcomes through simulations intended to identify delays before they occur in the real world.

Several case examples are included in the post. Microsoft says Aalo Atomics reduced the permitting process by 92% using its Generative AI for Permitting solution and estimates annual savings of 80$ million.

Aalo Atomics Chief Technology Officer Yasir Arafat is quoted as saying: ‘Two things matter most: enterprise-scale complexity and mission-critical reliability. We’re deploying something complex at a scale only a company like Microsoft really understands. There’s no room for anything less than proven reliability.’

Microsoft also says Southern Nuclear has deployed Copilot agents across engineering and licensing workstreams to improve consistency, reuse knowledge faster, and support decision-making. Idaho National Laboratory is described as an early adopter in the US federal context, with Microsoft saying the lab is using AI capabilities to automate the assembly of engineering and safety analysis reports and to create standard methodologies for regulators to adopt the tools safely.

The post also expands beyond those three examples. Microsoft says Everstar, described as an NVIDIA Inception startup, is bringing domain-specific AI for nuclear to Azure to support project workflows and governed data pipelines.

Everstar Chief Executive Officer Kevin Kong is quoted as saying: ‘The nuclear industry has been bottlenecked by documentation burden and regulatory complexity for decades. This partnership means our customers get the secure, scalable cloud deployments they demand. It’s a significant step toward making nuclear power fast, safe, and unstoppable.’

Microsoft also says Atomic Canyon’s Neutron platform is available on the Microsoft Marketplace for nuclear developers via established procurement channels.

At the technical level, Microsoft says the collaboration brings together NVIDIA Omniverse, NVIDIA Earth-2, NVIDIA CUDA-X, NVIDIA AI Enterprise, PhysicsNeMo, Isaac Sim, and Metropolis with Microsoft Generative AI for Permitting Solution Accelerator and Microsoft Planetary Computer. The company presents the stack as a digital ecosystem for nuclear energy on Azure.

The official post is a corporate announcement rather than an independent assessment of the approach’s effectiveness. The published note outlines the company’s intended use cases, named partners, and customer examples, but it does not provide a third-party evaluation of the broader claims regarding delivery speed, regulatory confidence, or sector-wide impact.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New AI safety policies target teen protection in apps

OpenAI has released a set of prompt-based safety policies to help developers build safer AI experiences for teenagers. The tools work with the open-weight model gpt-oss-safeguard, turning safety requirements into practical classifiers for real-world use.

The policies address teen risks, including graphic violence, sexual content, harmful body image behaviour, dangerous challenges, roleplay, and age-restricted goods and services. Developers can use them for both real-time filtering and offline content analysis.

The framework was developed with input from organisations such as Common Sense Media and everyone.ai to improve clarity and consistency in teen safety rules. The initiative also responds to long-standing challenges in translating high-level safety goals into precise operational systems.

Open-source availability through the ROOST Model Community allows developers to adapt and expand the policies for different use cases and languages. The framework is a foundational step, not a complete solution, encouraging layered safeguards and ongoing refinement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Google sets 2029 deadline for post-quantum cryptography migration

A transition to post-quantum cryptography by 2029 is being led by Google, aiming to secure digital systems against future quantum computing threats instead of relying on existing encryption standards.

The move reflects growing concern that advances in quantum hardware and algorithms could eventually undermine current cryptographic protections, particularly through attacks that store encrypted data today for decryption in the future.

Quantum computers are expected to challenge widely used encryption and digital signature systems, prompting the need for early transition strategies.

Google has updated its threat model to prioritise authentication services, recognising that digital signatures pose a critical vulnerability if not addressed before the arrival of quantum machines capable of cryptanalysis.

The company is encouraging broader industry action to accelerate migration efforts and reduce long-term security risks.

As part of its strategy, Google is integrating post-quantum cryptography into its products and services.

Android 17 will include quantum-resistant digital signature protection aligned with standards developed by the US’s National Institute of Standards and Technology. At the same time, support has already been introduced in Google Chrome and cloud platforms.

These measures aim to bring advanced security technologies directly to users instead of limiting them to experimental environments.

By setting a clear timeline, Google aims to instil urgency and direction across the wider technology sector.

The transition to post-quantum cryptography is expected to become a critical step in maintaining online security, ensuring that digital infrastructure remains resilient as quantum computing capabilities continue to evolve.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

OpenAI launches a public Safety Bug Bounty programme

OpenAI has introduced a public Safety Bug Bounty programme to identify misuse and safety risks across its AI systems. The initiative expands the company’s existing vulnerability reporting framework by focusing on harms that fall outside traditional security definitions.

The programme covers AI threats such as agentic risks, prompt injection, data exfiltration, and bypassing platform integrity controls. Researchers are encouraged to submit reproducible cases where AI systems perform harmful actions or expose sensitive information.

Unlike standard security reports, the initiative accepts safety issues that pose real-world risk, even if they are not classified as technical vulnerabilities. Dedicated safety and security teams will assess submissions and may be reassigned depending on relevance.

The scheme is open to external researchers and ethical hackers to strengthen AI safety through broader collaboration. OpenAI says the approach is intended to improve resilience against evolving misuse as AI systems become more advanced.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Scotland publishes AI guidance for schools

The Scottish government has published national guidance on the use of AI in schools, aiming to support the safe and ethical adoption of AI in classrooms. The document provides advice for teachers and pupils as AI use continues to expand across society.

The guidance outlines potential benefits of AI alongside risks that need to be considered, and includes examples of appropriate classroom use. It was developed with the EIS teaching union, local government and Education Scotland.

Education Secretary Jenny Gilruth said AI should support creativity, critical thinking and personalised learning while protecting pupils’ rights and privacy. She added that technology must not replace teachers or human relationships in education.

Andrea Bradley said AI should remain a tool for teachers and not replace professional judgement. The non-statutory guidance allows schools and local authorities flexibility to develop their own policies as AI continues to evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot 

CFTC launches AI and crypto innovation task force

The Commodity Futures Trading Commission (CFTC), an independent agency of the United States federal government, announced the creation of an Innovation Task Force to support the development of new technologies in US derivatives markets. Chairman Michael S. Selig leads the initiative and focuses on establishing clear regulatory approaches.

The task force will work with the Innovation Advisory Committee to develop frameworks covering crypto assets, blockchain technologies, AI and autonomous systems, and prediction markets. Authorities said the aim is to provide clarity for innovators building new financial products.

According to Selig, clearer rules are intended to support responsible innovation and ensure market participants remain competitive. The task force is also expected to help implement the Commission’s broader innovation agenda.

Coordination with other federal bodies is planned, including collaboration with the US Securities and Exchange Commission and its Crypto Task Force. Michael J. Passalacqua, senior advisor to the Chairman, has been appointed to lead the initiative.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot 

UK tests social media bans for children in national pilot

The UK government has launched a large-scale pilot programme to test social media restrictions in the homes of 300 teenagers, aiming to improve children’s well-being instead of relying solely on existing digital safety measures.

The initiative, led by the Department for Science, Innovation and Technology and supported by Liz Kendall, will run for six weeks and examine how limits on digital platforms affect young people’s daily lives, including sleep, schoolwork, and family relationships.

Families across the UK will be divided into groups testing different approaches. Some parents will block access to social media entirely, while others will introduce a one-hour daily limit on popular platforms such as Instagram, TikTok, and Snapchat.

Another group will implement overnight curfews, restricting access between 9 pm and 7 am, while a control group will maintain existing usage patterns rather than introducing changes.

Participants will be interviewed before and after the trial to assess behavioural and practical outcomes, including how easily restrictions can be enforced and whether teenagers attempt to bypass controls.

The pilot runs alongside a national consultation on children’s digital well-being, which has already received nearly 30,000 responses. Government officials and academic experts will analyse data gathered from both initiatives to guide future policy decisions.

A programme that aims to ensure that any regulatory steps are evidence-based, reflecting real-life experiences rather than theoretical assumptions about digital behaviour.

Alongside the government trials, an independent scientific study funded by the Wellcome Trust will examine the effects of reduced social media use among adolescents.

Led by researchers from the University of Cambridge and the Bradford Institute for Health Research, the study will involve around 4,000 students aged 12 to 15.

Findings are expected to provide deeper insight into how social media influences anxiety, sleep, relationships, and overall well-being, supporting policymakers in shaping future online safety measures instead of relying on limited evidence.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

National AI readiness initiative introduced in the US

The US National Science Foundation has introduced the NSF TechAccess: AI-Ready America initiative to expand access to AI education, tools, and training. The programme is designed to ensure workers, businesses, and communities can actively participate in the growing AI-driven economy.

Federal collaboration forms a core part of the initiative, bringing together the Department of Agriculture’s National Institute of Food and Agriculture, the Department of Labour, and the Small Business Administration.

The effort aims to close gaps in AI capability by improving literacy, supporting small businesses, and building hands-on learning pathways such as internships and applied training.

A network of up to 56 state and territory-based Coordination Hubs will be created to coordinate local AI adoption strategies. Each hub will receive up to $1 million in annual funding over three years, with the potential for an extension based on continued need and impact.

Further funding rounds are planned to appoint a national coordination lead and support pilot projects that scale AI readiness solutions. The initiative is part of a broader strategy informed by the White House AI Action Plan.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Cross-device browsing arrives with Samsung Browser for Windows

Samsung Electronics has launched Samsung Browser for Windows, expanding its mobile browsing experience to desktop users. The release focuses on cross-device continuity, allowing users to resume browsing sessions seamlessly between smartphones and PCs.

Users can move between devices without losing progress, extending beyond basic bookmark and history syncing. Integration with Samsung Pass also enables secure storage of personal data, simplifying logins and autofill across websites.

A key addition is the introduction of agentic AI capabilities developed in partnership with Perplexity. The built-in assistant understands page context and user activity, helping manage tabs, summarise content, and deliver more precise search results without leaving the browser.

Availability covers Windows 10 and 11 devices, while AI features are currently limited to the US and South Korea. A wider rollout is expected as Samsung continues to expand its intelligent browsing ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

New UK rules target foreign influence and crypto donations

The UK government has announced sweeping reforms to political donations, introducing a £100,000 annual cap on contributions from overseas electors. The move targets concerns that individuals living abroad could exert disproportionate financial influence on domestic politics.

Cryptocurrency donations have also been banned with immediate effect, reflecting fears over anonymity and the difficulty of tracing funds. Authorities warn that digital assets risk enabling untraceable political funding until stronger regulation is in place.

Both measures will apply retrospectively, requiring political parties and candidates to return any unlawful donations within 30 days once the legislation takes effect. Enforcement action may follow for non-compliance, signalling a stricter approach to financial oversight.

Reforms stem from the Rycroft Review, which highlighted vulnerabilities in the UK’s electoral system linked to foreign interference. Further changes, including stronger Electoral Commission powers and tighter donor checks, are expected.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot