HM Treasury and the Department for Science, Innovation and Technology have outlined plans to accelerate the UK's technological leadership, with a £2.5 billion investment targeting AI and quantum computing.
Chancellor Rachel Reeves has reinforced that ambition, positioning AI as a central driver of economic growth alongside closer European ties and regional development. The strategy aims to secure the fastest adoption of AI across the G7 while supporting domestic innovation ecosystems.
Significant UK funding will be directed towards a Sovereign AI initiative, quantum infrastructure and research capacity. Plans include the procurement of large-scale quantum systems and targeted investment in startups, helping companies scale while strengthening national capabilities in advanced technologies.
Quantum computing is framed as potentially transformative, with the capacity to reshape industries from healthcare to energy. The combined investment reflects a broader effort to align innovation policy with long-term economic growth and global competitiveness.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The Pentagon is accelerating efforts to replace Anthropic after the company was designated a supply-chain risk, marking a sharp shift in US defence AI strategy. The move follows a breakdown in talks over safeguards governing military use of AI, particularly around surveillance and autonomous weapons.
Cameron Stanley, the Pentagon’s chief digital and AI officer, said engineering work is underway to deploy alternative large language models in government-controlled environments. He indicated that while transitioning from Anthropic’s tools could take more than a month, new systems are expected to be operational soon.
The decision threatens a $200 million contract and could exclude Anthropic from future defence partnerships. The US administration has set a six-month timeline for federal agencies to shift away from the company, signalling a broader push to diversify AI suppliers and reduce dependency risks.
Rival providers are already stepping in. OpenAI and xAI have been approved for classified work, while Google is introducing Gemini AI tools across the Pentagon workforce, initially on unclassified networks before expanding into sensitive environments.
Anthropic has challenged the designation in court, arguing it violates constitutional protections and could harm its business. Despite the legal dispute, defence officials have made clear they are moving forward with an ‘AI-first’ strategy to accelerate the adoption of advanced models across military operations.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
SK Group Chairman Chey Tae-won warned that the global memory chip shortage could last for years, with structural supply constraints likely to continue into the next decade. Speaking on the sidelines of Nvidia GTC 2026 in San Jose, he said limited wafer capacity remains a key bottleneck for the semiconductor industry.
‘The shortage stems from a lack of wafer capacity, and securing additional wafers takes at least four to five years,’ Chey said. ‘We expect the industry-wide supply shortfall to persist at over 20 percent through 2030.’
He added that SK Hynix is implementing initiatives such as adjusting production schedules and diversifying supplier partnerships to stabilise prices. CEO Kwak Noh-jung is expected to provide further details on these new steps to manage volatility linked to the memory chip shortage.
Despite growing pressure to expand manufacturing overseas, Chey stressed that the group will prioritise domestic production to better respond to the ongoing memory chip shortage. ‘Building capacity outside Korea takes the same amount of time, regardless of location,’ he said. ‘Korea already has the infrastructure in place, allowing for a much faster response.’
He also highlighted the challenges of building fabrication plants abroad, including the need for reliable electricity and water supplies, as well as access to skilled engineering talent.
On competition in the high-bandwidth memory market, Chey noted that rising demand driven by artificial intelligence is reshaping supply dynamics. ‘AI requires graphics processing units (GPUs), and GPUs require HBM. We will do our best,’ he said, while cautioning that excessive focus on HBM could worsen the memory chip shortage for conventional DRAM used in smartphones and personal computers.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The University of Cambridge has partnered with the UK Atomic Energy Authority and the Department for Energy Security and Net Zero to deploy a major AI supercomputer for fusion energy. The system, named ‘Sunrise’, is designed to accelerate research into clean and sustainable power.
Developed with support from Dell Technologies, AMD and StackHPC, the GPU-powered machine will operate at a capacity of 1.4 MW. The project marks a significant step in strengthening the UK's sovereign computing capabilities while supporting the Culham AI Growth Zone initiative.
Research will focus on solving complex fusion challenges, including plasma turbulence, advanced materials, and fuel development. Advanced simulations and AI modelling are expected to play a key role in bringing fusion energy closer to commercial viability.
Plans aim to support the UK’s long-term goal of delivering fusion power to the national grid in the 2040s. Sunrise is scheduled to become operational in June, forming part of a broader national strategy to expand AI infrastructure and scientific innovation.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A major cyber incident has impacted Stryker Corporation, where attackers targeted its internal Microsoft environment and remotely wiped tens of thousands of employee devices without deploying traditional malware.
Access to systems was reportedly achieved through a compromised administrator account, allowing attackers to issue remote wipe commands via Microsoft Intune.
As a result, large parts of the company’s internal infrastructure were disrupted, with some services remaining offline and business operations affected.
Responsibility has been claimed by Handala, a group often associated with broader geopolitical cyber activity. The incident reflects a growing trend of cyber operations blending disruption, data theft and strategic messaging.
Despite the scale of the attack, the company confirmed that its medical devices and patient-facing technologies were not impacted.
The case highlights increasing risks linked to identity compromise and cloud-based management tools, where attackers can cause significant damage without relying on conventional malware techniques.
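The pattern described above, a single compromised identity triggering mass device wipes through a cloud management console, is exactly the kind of behaviour defenders try to surface in audit logs. The sketch below is a minimal, hypothetical illustration of that idea in Python; the event fields, thresholds and log format are assumptions for illustration, not the actual Intune audit schema.

```python
# Hypothetical sketch: flag accounts issuing an unusual burst of remote-wipe
# commands within a short time window. Event records are assumed to look like
# {"actor": "admin@example.com", "action": "wipe", "time": "2026-02-01T10:05:00"}.
from collections import defaultdict
from datetime import datetime, timedelta

def flag_wipe_bursts(events: list[dict], window_minutes: int = 30,
                     max_wipes: int = 5) -> set[str]:
    """Return accounts that issued more than `max_wipes` wipe commands
    within any rolling window of `window_minutes`."""
    by_actor = defaultdict(list)
    for e in events:
        if e["action"] == "wipe":
            by_actor[e["actor"]].append(datetime.fromisoformat(e["time"]))

    flagged = set()
    window = timedelta(minutes=window_minutes)
    for actor, times in by_actor.items():
        times.sort()
        for i, start in enumerate(times):
            # Count wipes that fall inside the window opened by this event.
            if sum(1 for t in times[i:] if t - start <= window) > max_wipes:
                flagged.add(actor)
                break
    return flagged
```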
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Concerns are growing over the risks posed by AI chatbots, particularly for minors, as evidence suggests these systems can facilitate harmful behaviour. A recent case in Finland, where a teenager planned a violent attack after interacting with an AI chatbot, has intensified calls for stronger oversight.
A report by the Center for Countering Digital Hate found that most leading AI chatbots provided assistance when prompted about violent acts. Researchers reported that eight out of ten systems tested generated harmful information or encouraged violence, highlighting gaps in existing safeguards.
The findings have renewed focus on how the Digital Services Act (DSA) could be applied to AI chatbots. Currently, the regulation primarily covers generative AI when integrated into large online platforms, leaving standalone chatbots in a regulatory grey area. Meanwhile, the AI Act focuses on model-level risks rather than user-facing systems.
Experts argue that this split leaves accountability unclear, as chatbot providers can avoid full responsibility by operating between regulatory frameworks. Proposals to delay elements of the AI Act or allow companies to self-assess risk levels have raised concerns about weakening safeguards at a critical moment for AI deployment.
Applying the DSA to chatbots could introduce obligations such as risk assessments, transparency requirements, and protections for minors. In the short term, chatbots could be treated as hosting services, requiring them to remove illegal content and respond to regulatory orders.
However, analysts warn that such measures would not fully address the risks. In the long term, they argue that the EU should create a dedicated regulatory category for AI chatbots, enabling stronger oversight similar to that applied to online platforms.
Stronger enforcement could also address harmful design features, such as systems that encourage prolonged engagement or escalate user prompts. Measures targeting manipulative interfaces and improving safeguards for minors could reduce the likelihood of harmful interactions.
As AI chatbots become more widely used for information, communication, and decision-making, policymakers face increasing pressure to act. Calls are growing for the EU to enforce existing rules while adapting its legal framework to ensure accountability keeps pace with technological change.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Digital technologies and AI are increasingly shaping economic development, governance and international cooperation. As these technologies expand rapidly, international organisations are working to ensure that innovation is accompanied by responsible governance, inclusive access and coordinated global policies.
Within the United Nations system, a range of initiatives aim to strengthen cooperation on digital transformation and the development of AI. These efforts address issues such as digital infrastructure, data governance, technological innovation and equitable participation in emerging digital ecosystems. International collaboration plays an essential role in ensuring that the benefits of digital technologies support sustainable development while reducing global inequalities in access to digital resources.
Several programmes across the United Nations system reflect these priorities, combining global governance initiatives with practical AI applications in areas such as development, humanitarian response and digital inclusion. The following sections examine selected initiatives that illustrate how AI and digital cooperation are being advanced across different areas of the UN system.
Global Digital Compact
The Global Digital Compact is a comprehensive international framework adopted by United Nations member states to guide global digital cooperation and enhance the governance of AI. It was negotiated by the 193 member states and reflects broad consultations aimed at shaping a shared vision for a digital future that is open, inclusive, safe, and secure for all. The Compact is part of the Pact for the Future, adopted at the 2024 Summit of the Future in New York.
At its core, the Compact seeks to address persistent digital divides by promoting universal connectivity, affordable access and inclusive participation in the digital economy. Governments and stakeholders have committed to connecting all individuals, schools, and hospitals to the internet, increasing investment in digital public infrastructure, and ensuring that technologies are accessible in diverse languages and formats.
The Compact also emphasises human rights and the protection of fundamental freedoms in the digital space, calling for strengthened legal and policy frameworks that uphold international law and protect users from harms such as misinformation and discrimination. It promotes an open, global, stable, and secure internet while supporting access to independent, fact-based information.
Another key objective of the Compact is to enhance international cooperation on data governance and AI for the benefit of humanity. It includes commitments to develop interoperable national data governance frameworks, advance responsible and equitable approaches to AI governance, and establish mechanisms for global dialogue and scientific guidance on AI. These elements reflect the need for collaborative, multistakeholder governance that balances innovation with transparency, accountability, and respect for human rights.
Independent International Scientific Panel on AI
The Independent International Scientific Panel on AI is a mechanism called for within the Global Digital Compact to support evidence‑based policymaking in AI governance. Member states requested the establishment of a multi‑disciplinary panel under the United Nations to assess the opportunities, risks and societal impacts of AI, and to promote scientific understanding across geographic and sectoral divides.
The panel is intended to contribute robust, independent scientific analysis to global AI discussions, ensuring that policy decisions are grounded in research rather than short‑term market pressures or fragmented national approaches. Its mandate includes conducting comprehensive risk and impact assessments, developing common methodologies for evaluating AI systems, and advising on interoperable governance frameworks that respect human rights and international law.
By bringing together experts from diverse disciplines and regions, the panel aims to bridge the gap between scientific developments and policymaking. It is a key institutional mechanism for fostering inclusive AI governance, with balanced geographic representation to ensure that insights reflect global needs rather than narrow technological interests.
The panel also complements the broader Global Dialogue on AI Governance, which seeks to engage governments, international organisations, civil society and technical communities in ongoing discussions about normative approaches, standards, and principles for global AI governance.
The UN Digital Cooperation Portal
The UN Digital Cooperation Portal is a central platform designed to support the implementation of the Global Digital Compact by mapping global digital cooperation activities and facilitating coordination among diverse stakeholders. The portal invites governments, UN entities, civil society organisations, researchers, and private sector actors to voluntarily submit information on initiatives related to the Compact’s objectives.
Launched in December 2025, the portal aggregates initiatives across thematic areas, including digital inclusion, AI governance, data governance, digital infrastructure, and the protection of human rights online. By visualising how activities align with agreed international frameworks, the platform supports strategic collaboration, strengthens transparency and highlights opportunities for joint action across regions and sectors.
The portal generates interactive data visualisations that illustrate how digital cooperation initiatives are evolving at the national, regional and global levels. These tools help identify gaps and overlaps in current efforts, enabling stakeholders to coordinate more effectively in pursuit of shared objectives such as closing digital divides and advancing equitable digital development.
As a resource for governments, UN agencies and external partners, the portal also contributes to the preparatory process for the high‑level review of the Global Digital Compact scheduled for 2027, providing an evidence‑based foundation for assessing progress and emerging policy priorities.
Closing the language gap in AI through local language accelerators
Language diversity remains one of the major challenges in global AI development. More than seven thousand languages are spoken worldwide, yet most AI systems currently support only a small number of widely used global languages.
Around 1.2 billion people rely on low-resource languages that remain poorly represented in digital technologies. Limited language representation can restrict access to AI-powered services in sectors such as agriculture, healthcare, education and civic participation.
The local language accelerator initiative combines technological development with partnerships involving universities, research institutions and local language communities. The technologies involved include optical character recognition systems that digitise written texts, automatic speech recognition tools capable of processing spoken language, and text-to-speech technologies that generate digital audio.
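To make those three capabilities concrete, here is a minimal Python sketch using openly available libraries as stand-ins; the initiative's actual tooling is not specified in the text, so the libraries, model sizes and language codes below (Tesseract's 'swa' pack and Whisper's 'sw' code for Swahili) are illustrative assumptions.

```python
# Minimal sketch of an OCR / speech-recognition / text-to-speech pipeline.
import pytesseract            # optical character recognition
import whisper                # automatic speech recognition (openai-whisper)
import pyttsx3                # offline text-to-speech
from PIL import Image

def digitise_page(image_path: str, lang: str = "swa") -> str:
    """OCR a scanned page; `lang` must match an installed Tesseract language pack."""
    return pytesseract.image_to_string(Image.open(image_path), lang=lang)

def transcribe_audio(audio_path: str, language: str = "sw") -> str:
    """Transcribe recorded speech with a small multilingual Whisper model."""
    model = whisper.load_model("small")
    return model.transcribe(audio_path, language=language)["text"]

def speak(text: str) -> None:
    """Generate spoken audio from text using the system's available voices."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()
```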
Using satellite imagery and AI to improve disaster response
Rapid damage assessment plays a critical role in humanitarian response following natural disasters. Traditional assessment methods often require manual analysis of satellite images and field inspections conducted by experts, a process that can take weeks.
Emergency response operations, however, require reliable information within the first seventy-two hours after a disaster to prioritise rescue operations and humanitarian assistance.
The SKAI platform, developed by the World Food Programme Innovation Accelerator, uses AI-based computer vision to analyse satellite imagery and identify damaged buildings automatically. The system enables humanitarian organisations to assess destruction at the level of individual structures across large geographic areas.
Developed as an open-source project in collaboration with Google Research, the platform can generate prioritised damage assessments within approximately twenty-four hours. Since 2022, the system has analysed more than 3.9 million buildings and identified around 450,000 severely damaged or destroyed structures.
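As a rough illustration of the task SKAI automates, the sketch below computes a naive pixel-level change score between pre- and post-disaster image chips for each building and flags large changes. SKAI itself relies on trained computer-vision models, so this is only a conceptual toy; the file paths, record fields and threshold are assumptions.

```python
# Toy per-building damage flagging from pre/post-event satellite chips.
# Assumes each building has co-registered, equally sized pre/post image chips.
from PIL import Image
import numpy as np

def change_score(pre_path: str, post_path: str) -> float:
    """Mean absolute pixel difference between pre- and post-event chips (0..1)."""
    pre = np.asarray(Image.open(pre_path).convert("L"), dtype=np.float32)
    post = np.asarray(Image.open(post_path).convert("L"), dtype=np.float32)
    return float(np.mean(np.abs(pre - post)) / 255.0)

def assess(buildings: list[dict], threshold: float = 0.25) -> list[dict]:
    """Flag buildings whose imagery changed strongly after the event."""
    results = []
    for b in buildings:
        score = change_score(b["pre_chip"], b["post_chip"])
        results.append({"id": b["id"], "score": score,
                        "likely_damaged": score >= threshold})
    return results
```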
Expanding inclusive participation through the UN Women AI School
Increasing participation in AI development is another priority across the United Nations system. Women remain underrepresented in many AI-related fields, including machine learning engineering and data science.
The UN Women AI School addresses this challenge by providing training programmes designed for policymakers, civil society organisations, UN staff, and young innovators. The initiative aims to strengthen AI literacy and encourage broader participation in shaping the future of digital technologies.
Participants follow structured training tracks combining technical education with discussions on AI governance, ethics, and social impact. Collaborative learning environments encourage participants to develop solutions tailored to the needs of their communities.
More than three thousand participants have taken part in the programme since its launch. A train-the-trainer (ToT) model enables graduates to support future training programmes and expand the initiative to additional regions.
Responsible AI in satellite technologies and earth observation
AI technologies are increasingly integrated into satellite systems and Earth observation platforms. These systems analyse large volumes of geospatial data and generate near-real-time insights about environmental conditions.
Applications include monitoring climate change, analysing natural disasters, and supporting environmental policy planning. Rapid technological progress in this field also raises governance challenges related to transparency and accountability.
Many AI models used in satellite analysis operate as black box systems whose internal decision-making processes are difficult to interpret. Limited transparency can create risks when such systems are used to inform critical policy decisions.
Data bias represents another concern. Training datasets often originate primarily from the Global North, which may lead to inaccurate interpretations of environmental conditions in other regions of the world.
Assessing national AI readiness with UNESCO's methodology
UNESCO's Readiness Assessment Methodology examines multiple dimensions of national AI ecosystems, including infrastructure, research capacity, institutional readiness and regulatory frameworks. Rather than ranking countries, the assessment identifies strengths and areas requiring further development.
Since its introduction in 2022, the methodology has been implemented in more than seventy countries. More than seventeen thousand stakeholders have participated in consultations associated with the initiative.
Assessment results have contributed to the development of national AI strategies and policy frameworks in several regions. An updated version of the methodology is expected to be released in 2026.
Additionally, UNESCO promotes the ethical development and use of AI through its Recommendation on the Ethics of Artificial Intelligence. The global framework sets out principles on transparency, accountability, fairness, and respect for human rights to guide national policies and international cooperation.
AI for Good and global capacity building
The International Telecommunication Union coordinates the AI for Good initiative, which focuses on applying AI technologies to global challenges while strengthening international cooperation in governance and standards.
The programme operates across multiple areas, including multistakeholder dialogue, technical standard development, governance support and capacity development activities.
More than four hundred AI-related standards have already been developed in areas such as multimedia technologies, energy efficiency and cybersecurity. Governance dialogues organised through the initiative have involved more than one hundred ministers and regulators.
Educational programmes linked to the initiative aim to expand digital skills among young people worldwide through robotics competitions, machine learning challenges and educational partnerships.
The AI for Good Global Summit 2026, set to take place from 7–10 July in Geneva, will convene governments, industry leaders and civil society to advance AI governance, promote responsible innovation, and highlight initiatives that foster inclusive and equitable digital development.
AI tools supporting refugee entrepreneurship
AI technologies are also being used to support economic opportunities for displaced populations. The United Nations Refugee Agency has developed an AI-powered virtual assistant designed to help refugees and asylum seekers transform business ideas into structured business plans.
The platform guides users through financial planning, market analysis and the preparation of investment proposals. The development of the system involved collaboration with NGOs, governments, and entrepreneurial networks across Latin America.
The tool was initially implemented in Paraguay and was designed with input from refugee communities. Remote access allows users to engage with the platform regardless of geographical or institutional constraints.
More than 340 refugee entrepreneurs have used the platform since its launch, with women representing approximately sixty percent of participants. The model is designed to be scalable and could be implemented in additional regions.
Promoting responsible innovation in civilian AI for peace and security
The rapid expansion of AI technologies brings increasing security challenges, particularly due to the potential misuse of civilian AI systems in military, conflict-related, or high-risk contexts. Dual-use applications mean that tools designed for civilian purposes, such as data analysis or autonomous systems, could also be repurposed in ways that threaten international peace, stability or human safety.
The United Nations Office for Disarmament Affairs works to foster responsible innovation practices, ensuring that the development and deployment of AI technologies consider their broader implications for global peace and security. Addressing these risks requires ongoing collaboration and dialogue among policymakers, researchers, industry stakeholders, and civil society, creating a shared framework for understanding and mitigating potential threats.
To support this, the programme organises a comprehensive set of initiatives, including thematic multistakeholder dialogues, academic workshops, public panels, private sector roundtables and in-person training sessions for graduate students. These activities aim not only to raise awareness of emerging security risks, but also to provide practical guidance and tools that promote safe, transparent and accountable AI practices in civilian applications worldwide.
UN 2.0 Communities of Practice
Knowledge sharing and collaboration are strengthened through UN 2.0 Communities of Practice, connecting partners across the United Nations system and beyond. The networks facilitate the exchange of expertise and approaches on digital transformation, data strategy, innovation, and strategic foresight.
Over 18,000 practitioners from more than 160 countries participate, enhancing the collective capacity to address complex AI and digital challenges. Thematic groups, including those focused on digital and data initiatives, support peer-to-peer engagement, professional development, and collaborative problem-solving. Participation allows stakeholders to contribute to a wider ecosystem of expertise and innovation, promoting inclusive digital governance and supporting the Sustainable Development Goals.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Concerns over data protection have intensified as the European Commission calls on major technology companies to apply EU standards when handling sensitive staff information linked to digital regulation.
Pressure follows requests from the US House Judiciary Committee seeking access to communications between US firms and EU officials involved in enforcing laws such as the Digital Services Act and Digital Markets Act.
EU officials emphasise that formal exchanges with companies take place through official channels, including documented correspondence, rather than informal messaging platforms. Internal communication practices may involve encrypted tools, reflecting growing concerns about data security and external scrutiny.
Debate surrounding the issue reflects wider tensions between the EU and the US over digital governance, privacy protections and regulatory authority. Questions over jurisdiction and access to sensitive communications are likely to remain central as transatlantic tech policy evolves.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Debate over potential updates to the GDPR is intensifying, as MEP Marina Kaljurand advocates a focused ‘fitness check’ rather than sweeping legislative changes in an omnibus package.
Concerns raised in the European Parliament highlight risks associated with altering foundational elements of the regulation, particularly its definition of personal data. Preserving these core principles is seen as essential to maintaining the integrity of the EU’s data protection framework.
Ongoing discussions reflect broader policy tensions within the EU, where efforts to reduce regulatory complexity must be balanced against the need to uphold strong privacy safeguards. Proposals for simplification are therefore facing scrutiny from lawmakers prioritising stability and legal clarity.
Future developments are likely to shape how the EU adapts its data protection rules to evolving digital markets, while ensuring that existing protections remain effective in a rapidly changing technological environment.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Efforts to improve the security of Europe’s digital infrastructure have advanced as the European Commission opens a €180 million funding call to support backup systems for subsea internet cables.
Investment by the EU will focus on developing alternative routes and redundancy mechanisms, ensuring continuity of connectivity in the event of disruptions affecting critical undersea networks that carry global data traffic.
Growing concerns around infrastructure vulnerability have increased attention on subsea cables, which play a central role in international communications. Strengthening resilience is therefore becoming a priority within broader European strategies on technological sovereignty and security.
Planned projects are expected to enhance reliability across the region, reducing risks associated with outages or potential external threats to essential telecommunications infrastructure.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!