Genetics experts and Microsoft design AI assistant to streamline sequencing

Microsoft, Drexel University, and the Broad Institute have developed a generative AI assistant to support genome sequencing. The study in ACM Transactions on Interactive Intelligent Systems demonstrates how AI can accelerate searching, filtering, and synthesising data in rare disease diagnosis.

Whole genome sequencing often takes weeks and yields a diagnosis in fewer than half of cases. Analysts must decide which unsolved cases to revisit as new research appears. The AI assistant flags cases for reanalysis and compiles new gene and variant data into a clear, usable format.

The team interviewed 17 genetics professionals to map workflows and challenges before co-designing the prototype. Sessions focused on problems such as data overload, slow collaboration, and difficulty prioritising unsolved cases, helping ensure the tool addressed real-world pain points.

The prototype enables collaborative sensemaking, allowing users to edit and verify AI-generated content. It offers flexible filtering to surface the most relevant evidence while keeping a comprehensive view, saving time and improving decision-making.
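
As a rough illustration of the reanalysis-flagging behaviour described above, the Python sketch below is hypothetical rather than drawn from the prototype itself: the case fields, gene sets, and flagging rule are assumptions for illustration. It simply checks unsolved cases against newly reported gene–disease associations and flags any overlap for an analyst to review.

```python
from dataclasses import dataclass, field

@dataclass
class Case:
    """Hypothetical unsolved sequencing case with candidate genes."""
    case_id: str
    candidate_genes: set[str]
    solved: bool = False
    flags: list[str] = field(default_factory=list)

def flag_for_reanalysis(cases, new_associations):
    """Flag unsolved cases whose candidate genes overlap newly reported
    gene-disease associations (a stand-in for fresh literature)."""
    flagged = []
    for case in cases:
        if case.solved:
            continue
        hits = case.candidate_genes & set(new_associations)
        if hits:
            case.flags.append(f"new evidence for: {', '.join(sorted(hits))}")
            flagged.append(case)
    return flagged

# Toy usage: two unsolved cases, one overlapping a newly reported gene.
cases = [Case("C001", {"ABCD1", "GENE_X"}), Case("C002", {"GENE_Y"})]
for c in flag_for_reanalysis(cases, new_associations={"GENE_X", "GENE_Z"}):
    print(c.case_id, c.flags)
```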

Microsoft-led researchers plan to test the assistant in real-world environments to measure its effect on diagnostic yield and workflow efficiency. They emphasise that success will depend on collaboration among developers, genetics experts, and system designers to build trustworthy and explainable tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta’s open source AI models now available to all federal departments

The US General Services Administration (GSA) has launched a OneGov initiative with Meta to give federal agencies streamlined access to Llama, Meta’s family of open source AI models. The approach eliminates individual agency negotiations, saving time and reducing duplicated work across departments.

The initiative supports America’s AI Action Plan and federal memoranda, promoting the government’s accelerated and efficient use of AI. Rapid access to Llama aims to boost innovation, governance, public trust, and operational efficiency.

Open source Llama models allow federal teams to maintain complete control over data processing and storage. Agencies can build, deploy, and scale AI applications at lower cost, enhancing public services while delivering value to taxpayers.
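
To make the point about data control concrete, here is a minimal sketch (not an official GSA or Meta workflow) of running an open-weights Llama checkpoint locally with the Hugging Face transformers library; the model ID and settings below are assumptions for illustration, and the checkpoint is gated behind Meta’s licence on the Hugging Face Hub.

```python
# Minimal sketch: self-hosted inference with an open-weights Llama checkpoint,
# so prompts and outputs stay on local infrastructure rather than a vendor API.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.1-8B-Instruct",  # assumed example checkpoint (licence acceptance required)
    device_map="auto",                          # place weights on available local GPUs/CPU
)

prompt = "Summarise, in two sentences, why agencies might self-host open-weights models."
output = generator(prompt, max_new_tokens=120, do_sample=False)
print(output[0]["generated_text"])
```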

Free access to the models further enables agencies to develop tailored solutions without relying on proprietary platforms.

Collaboration between the GSA and Meta ensures federal requirements are met while providing consistent access across departments. The arrangement enhances the government’s ability to implement AI while promoting transparency, reproducibility, and flexible mission-specific applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA and OpenAI partner to build 10 gigawatts of AI data centres

OpenAI and NVIDIA have announced a strategic partnership to build at least 10 gigawatts of AI data centres powered by millions of NVIDIA GPUs.

The deal, backed by up to $100 billion in investment from NVIDIA, aims to provide the infrastructure for OpenAI’s next generation of models, with the first phase scheduled to come online in late 2026 on the NVIDIA Vera Rubin platform.

The companies said the collaboration will enable the development of AGI and accelerate AI adoption worldwide. OpenAI will treat NVIDIA as its preferred strategic compute and networking partner, coordinating both sides’ hardware and software roadmaps.

They will also continue working with Microsoft, Oracle, SoftBank and other partners to build advanced AI infrastructure.

OpenAI has grown to more than 700 million weekly users across businesses and developers globally. Executives at both firms described the new partnership as the next leap in AI computing power, one intended to fuel innovation at scale instead of incremental improvements.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google DeepMind updates AI safety framework for advanced risks

Google DeepMind has released the third iteration of its Frontier Safety Framework (FSF), which aims to identify and mitigate severe risks from advanced AI models. The update expands the framework’s risk domains and refines the process for assessing potential threats.

Key changes include the introduction of a Critical Capability Level (CCL) focused on harmful manipulation. The update targets AI models with the potential to systematically influence beliefs and behaviours in high-stakes contexts, ensuring safety measures keep pace with growing model capabilities.

The framework also strengthens protocols for misalignment risks, addressing scenarios in which an AI system could override operators’ attempts to control or shut it down. Safety case reviews are now conducted before external launches and before large-scale internal deployments of models that reach critical capability thresholds.

The updated FSF sharpens risk assessments and applies safety and security mitigations in proportion to threat severity. It reflects a commitment to evidence-based AI governance, expert collaboration, and ensuring AI benefits humanity while minimising risks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN reports at a crossroads

As world leaders gather in New York for the UN General Assembly, an unusual but timely question is being raised. In his recent blog, ‘Should the United Nations continue writing reports?’, Jovan Kurbalija argues that while some reports are vital, such as those exposing the role of tech companies in conflict zones, many have become little more than bureaucratic rituals with limited impact.

The UN Secretary-General himself has voiced concerns that the endless production of papers risks overshadowing the organisation’s true mission. The debate reveals two opposing views.

On one side, critics say reports distract from the UN’s core purpose of convening nations, negotiating compromises, and resolving crises. They point to history, such as the failed Treaty of Versailles, to warn that diplomacy loses its strength when buried under data and ‘scientific’ prescriptions.

Reports, they argue, cannot prevent wars or build trust without political will. Worse still, the drafting process is often more about avoiding offence than telling the truth, blurring the line between reporting and negotiation.

Defenders, however, insist that UN reports remain essential. They provide legitimacy, establish a shared baseline of facts, and create informal spaces for diplomacy even before formal talks begin.

At a time of deep geopolitical divides and mistrust in institutions, independent UN analysis could be one of the few remaining tools to anchor global debates. While AI is increasingly capable of churning out facts and summaries, Kurbalija notes that human insight is still needed to read between the lines and grasp nuance.

The way forward, he suggests, is not to abandon reports altogether but to make them fewer, sharper, and more focused on action. Instead of endless PDFs, the UN should channel its energy back into mediation, dialogue, and the intricate craft of diplomacy. In a world drowning in information but starving for wisdom, reports should illuminate choices, not replace the art of negotiation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Amazon outlines responsible AI and global internet plans at UN

Amazon is meeting world leaders at the 80th UN General Assembly to share its vision for responsible AI and global internet access. The company highlighted Project Kuiper, its satellite initiative to provide affordable internet to underserved communities and bridge the digital divide.

The initiative aims to deliver fast, affordable internet to communities without access, boosting education and economic opportunities. Connectivity is presented as essential for participation in the modern economy, as well as for cultural and knowledge exchange across the globe.

Amazon emphasised the development of AI tools that are responsible, inclusive, and designed to enhance human potential. The company aims to make technology accessible, helping small businesses, speeding research, and offering tools once reserved for large organisations.

Collaboration remains central to Amazon’s approach. The company plans to work with governments, the UN, civil society, and other private sector partners to ensure technological advancements benefit humanity while mitigating potential risks.

Discussions at UNGA80 are expected to shape future strategies for innovation, governance, and sustainable development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Europe prepares formal call for AI Gigafactory projects

The European Commission is working with EU capitals to narrow down the list of proposals for large AI training hubs, known as AI Gigafactories. The €20 billion plan is to be funded by the Commission (17%), EU countries (17%), and industry (66%) to boost computing capacity for European developers.

The first call drew 76 proposals from 16 countries, far exceeding the initially planned four or five facilities. Most submissions must be merged or dropped, with Poland already seeking a joint bid with the Baltic states as talks continue.

Some EU members will inevitably lose out, with Ursula von der Leyen, the President of the European Commission, hinting that priority could be given to countries already hosting AI Factories. That could benefit Finland, whose Lumi supercomputer is part of a Nokia-led bid to scale up into a Gigafactory.

The plan has raised concerns that Europe’s efforts come too late, as US tech giants invest heavily in larger AI hubs. Still, Brussels hopes its initiative will allow EU developers to compete globally while maintaining control over critical AI infrastructure.

A formal call for proposals is expected by the end of the year, once the legal framework is finalised. Selection criteria and funding conditions will be set to launch construction as early as 2026.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

BlackRock backs South Korea push to become Asia AI hub

South Korea has secured a significant partnership with BlackRock to accelerate its ambition of becoming Asia’s leading AI hub. The agreement will see the global asset manager join the Ministry of Science and ICT in developing hyperscale AI data centres.

The deal followed a meeting between President Lee Jae Myung and BlackRock chair Larry Fink, who pledged to attract large-scale international investment into the country’s AI infrastructure.

Although no figures were disclosed, the partnership is expected to focus on meeting rising demand from domestic users and the wider Asia-Pacific region, with renewable energy powering the facilities.

The move comes as Seoul increases national funding for AI, semiconductors and other strategic technologies to KRW150 trillion ($107.7 billion). South Korean companies are also stepping up efforts, with SK Telecom announcing plans to raise AI investment to a third of its revenue over five years.

BlackRock’s involvement signals international confidence in South Korea’s long-term vision to position itself as a regional AI powerhouse and secure a leadership role in next-generation digital infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Research shows AI complements, not replaces, human work

AI headlines often flip between hype and fear, but the truth is more nuanced. Much research is misrepresented, with task overlaps miscast as job losses. Leaders and workers need clear guidance on using AI effectively.

Microsoft Research mapped 200,000 Copilot conversations to work tasks, but headlines warned of job risks. The study showed overlap, not replacement. Context, judgment, and interpretation remain human strengths, meaning AI supports rather than replaces roles.

Other research is similarly skewed. METR found that AI slowed developers by 19%, but mostly due to the learning curves associated with first use. MIT’s ‘GenAI Divide’ measured adoption, not ability, showing workflow gaps rather than technology failure.

Better studies reveal the collaborative power of AI. Harvard’s ‘Cybernetic Teammate’ experiment demonstrated that individuals using AI performed as well as full teams without it. AI bridged technical and commercial silos, boosting engagement and improving the quality of solutions produced.

The future of AI at work will be shaped by thoughtful trials, not headlines. By treating AI as a teammate, organisations can refine workflows, strengthen collaboration, and turn AI’s potential into long-term competitive advantage.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Nissan to launch next-generation ProPILOT in 2027

Nissan has announced plans to launch its next-generation ProPILOT system in fiscal year 2027. The upgraded system will include Nissan Ground Truth Perception, next-generation Lidar, and Wayve AI Driver, enhancing collision avoidance and autonomous driving.

Wayve AI Driver software is built on an embodied AI foundation model that enables human-like decision-making in complex real-world driving conditions. By efficiently learning from large volumes of data, the system continuously enhances Nissan vehicles’ performance and safety.

Wayve, a global AI company, specialises in embodied AI for driving. Its foundation model leverages extensive real-world experience to deliver reliable point-to-point navigation across urban and highway environments, while adapting quickly to new scenarios and platforms.

The partnership positions Nissan at the forefront of autonomous vehicle technology, combining cutting-edge sensors, AI, and adaptive software to redefine safety and efficiency in future mobility.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!