Plans to accelerate technological leadership have been outlined by HM Treasury and the Department for Science, Innovation and Technology, with a £2.5 billion investment targeting AI and quantum computing.
The ambition has been reinforced by Rachel Reeves, who positioned AI as a central driver of economic growth, alongside closer European ties and regional development. The strategy aims to secure the fastest adoption of AI in the G7 while supporting domestic innovation ecosystems.
Significant funding in the UK will be directed towards a Sovereign AI initiative, quantum infrastructure and research capacity. Plans include procurement of large-scale quantum systems and targeted investment in startups, helping companies scale while strengthening national capabilities in advanced technologies.
Expectations surrounding quantum computing are framed as transformative, with potential to reshape industries from healthcare to energy. Combined investment reflects a broader effort to align innovation policy with long-term economic growth and global competitiveness.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The Pentagon is accelerating efforts to replace Anthropic after the company was designated a supply-chain risk, marking a sharp shift in US defence AI strategy. The move follows a breakdown in talks over safeguards governing military use of AI, particularly around surveillance and autonomous weapons.
Cameron Stanley, the Pentagon’s chief digital and AI officer, said engineering work is underway to deploy alternative large language models in government-controlled environments. He indicated that while transitioning from Anthropic’s tools could take more than a month, new systems are expected to be operational soon.
The decision threatens a $200 million contract and could exclude Anthropic from future defence partnerships. The US administration has set a six-month timeline for federal agencies to shift away from the company, signalling a broader push to diversify AI suppliers and reduce dependency risks.
Rival providers are already stepping in. OpenAI and xAI have been approved for classified work, while Google is introducing Gemini AI tools across the Pentagon workforce, initially on unclassified networks before expanding into sensitive environments.
Anthropic has challenged the designation in court, arguing it violates constitutional protections and could harm its business. Despite the legal dispute, defence officials have made clear they are moving forward with an ‘AI-first’ strategy to accelerate the adoption of advanced models across military operations.
SK Group Chairman Chey Tae-won warned that the global memory chip shortage could last for years, with structural supply constraints likely to continue into the next decade. Speaking on the sidelines of Nvidia GTC 2026 in San Jose, he said limited wafer capacity remains a key bottleneck for the semiconductor industry.
‘The shortage stems from a lack of wafer capacity, and securing additional wafers takes at least four to five years,’ Chey said. ‘We expect the industry-wide supply shortfall to persist at over 20 percent through 2030.’
He added that SK Hynix is implementing initiatives such as adjusting production schedules and diversifying supplier partnerships to stabilise prices. CEO Kwak Noh-jung is expected to provide further details on these new steps to manage volatility linked to the memory chip shortage.
Despite growing pressure to expand manufacturing overseas, Chey stressed that the group will prioritise domestic production to better respond to the ongoing memory chip shortage. ‘Building capacity outside Korea takes the same amount of time, regardless of location,’ he said. ‘Korea already has the infrastructure in place, allowing for a much faster response.’
He also highlighted the challenges of building fabrication plants abroad, including the need for reliable electricity and water supplies, as well as access to skilled engineering talent.
On competition in the high-bandwidth memory market, Chey noted that rising demand driven by artificial intelligence is reshaping supply dynamics. ‘AI requires graphics processing units (GPUs), and GPUs require HBM. We will do our best,’ he said, while cautioning that excessive focus on HBM could worsen the memory chip shortage for conventional DRAM used in smartphones and personal computers.
Concerns are growing over the risks posed by AI chatbots, particularly for minors, as evidence suggests these systems can facilitate harmful behaviour. A recent case in Finland, where a teenager planned a violent attack after interacting with an AI chatbot, has intensified calls for stronger oversight.
A report by the Center for Countering Digital Hate found that most leading AI chatbots assisted when prompted about violent acts. Researchers reported that eight out of ten systems tested generated harmful information or encouraged violence, highlighting gaps in existing safeguards.
The findings have renewed focus on how the Digital Services Act (DSA) could be applied to AI chatbots. Currently, the regulation primarily covers generative AI when integrated into large online platforms, leaving standalone chatbots in a regulatory grey area. Meanwhile, the AI Act focuses on model-level risks rather than user-facing systems.
Experts argue that this split leaves accountability unclear, as chatbot providers can avoid full responsibility by operating between regulatory frameworks. Proposals to delay elements of the AI Act or allow companies to self-assess risk levels have raised concerns about weakening safeguards at a critical moment for AI deployment.
Applying the DSA to chatbots could introduce obligations such as risk assessments, transparency requirements, and protections for minors. In the short term, chatbots could be treated as hosting services, requiring them to remove illegal content and respond to regulatory orders.
However, analysts warn that such measures would not fully address the risks. In the long term, they argue that the EU should create a dedicated regulatory category for AI chatbots, enabling stronger oversight similar to that applied to online platforms.
Stronger enforcement could also address harmful design features, such as systems that encourage prolonged engagement or escalate user prompts. Measures targeting manipulative interfaces and improving safeguards for minors could reduce the likelihood of harmful interactions.
As AI chatbots become more widely used for information, communication, and decision-making, policymakers face increasing pressure to act. Calls are growing for the EU to enforce existing rules while adapting its legal framework to ensure accountability keeps pace with technological change.
Digital technologies and AI are increasingly shaping economic development, governance and international cooperation. As these technologies expand rapidly, international organisations are working to ensure that innovation is accompanied by responsible governance, inclusive access and coordinated global policies.
Within the United Nations system, a range of initiatives aim to strengthen cooperation on digital transformation and the development of AI. These efforts address issues such as digital infrastructure, data governance, technological innovation and equitable participation in emerging digital ecosystems. International collaboration plays an essential role in ensuring that the benefits of digital technologies support sustainable development while reducing global inequalities in access to digital resources.
Several programmes across the United Nations system reflect these priorities, combining global governance initiatives with practical AI applications in areas such as development, humanitarian response and digital inclusion. The following sections examine selected initiatives that illustrate how AI and digital cooperation are being advanced across different areas of the UN system.
Global Digital Compact
The Global Digital Compact is a comprehensive international framework adopted by United Nations member states to guide global digital cooperation and enhance the governance of AI. It was negotiated by all 193 member states and reflects broad consultations aimed at shaping a shared vision for a digital future that is open, inclusive, safe, and secure for all. The Compact is part of the Pact for the Future, adopted at the 2024 Summit of the Future in New York.
At its core, the Compact seeks to address persistent digital divides by promoting universal connectivity, affordable access and inclusive participation in the digital economy. Governments and stakeholders have committed to connecting all individuals, schools, and hospitals to the internet, increasing investment in digital public infrastructure, and ensuring that technologies are accessible in diverse languages and formats.
The Compact also emphasises human rights and the protection of fundamental freedoms in the digital space, calling for strengthened legal and policy frameworks that uphold international law and protect users from harms such as misinformation and discrimination. It promotes an open, global, stable, and secure internet while supporting access to independent, fact-based information.
The key objective of the Compact is to enhance international cooperation on data governance and AI for the benefit of humanity. It includes commitments to develop interoperable national data governance frameworks, advance responsible and equitable approaches to AI governance, and establish mechanisms for global dialogue and scientific guidance on AI. These elements reflect the need for collaborative, multistakeholder governance that balances innovation with transparency, accountability, and respect for human rights.
Independent International Scientific Panel on AI
The Independent International Scientific Panel on AI is a mechanism called for within the Global Digital Compact to support evidence‑based policymaking in AI governance. Member states requested the establishment of a multi‑disciplinary panel under the United Nations to assess the opportunities, risks and societal impacts of AI, and to promote scientific understanding across geographic and sectoral divides.
The panel is intended to contribute robust, independent scientific analysis to global AI discussions, ensuring that policy decisions are grounded in research rather than short‑term market pressures or fragmented national approaches. Its mandate includes conducting comprehensive risk and impact assessments, developing common methodologies for evaluating AI systems, and advising on interoperable governance frameworks that respect human rights and international law.
By bringing together experts from diverse disciplines and regions, the panel aims to bridge the gap between scientific developments and policymaking. It is a key institutional mechanism for fostering inclusive AI governance, with balanced geographic representation to ensure that insights reflect global needs rather than narrow technological interests.
The panel also complements the broader Global Dialogue on AI Governance, which seeks to engage governments, international organisations, civil society and technical communities in ongoing discussions about normative approaches, standards, and principles for global AI governance.
The UN Digital Cooperation Portal
The UN Digital Cooperation Portal is a central platform designed to support the implementation of the Global Digital Compact by mapping global digital cooperation activities and facilitating coordination among diverse stakeholders. The portal invites governments, UN entities, civil society organisations, researchers, and private sector actors to voluntarily submit information on initiatives related to the Compact’s objectives.
Launched in December 2025, the portal aggregates initiatives across thematic areas, including digital inclusion, AI governance, data governance, digital infrastructure, and the protection of human rights online. By visualising how activities align with agreed international frameworks, the platform supports strategic collaboration, strengthens transparency and highlights opportunities for joint action across regions and sectors.
The portal generates interactive data visualisations that illustrate how digital cooperation initiatives are evolving at the national, regional and global levels. These tools help identify gaps and overlaps in current efforts, enabling stakeholders to coordinate more effectively in pursuit of shared objectives such as closing digital divides and advancing equitable digital development.
As a resource for governments, UN agencies and external partners, the portal also contributes to the preparatory process for the high‑level review of the Global Digital Compact scheduled for 2027, providing an evidence‑based foundation for assessing progress and identifying emerging policy priorities.
Closing the language gap in AI through local language accelerators
Language diversity remains one of the major challenges in global AI development. The world’s population speaks more than seven thousand languages, yet most AI systems currently support only a small number of widely used global languages.
Around 1.2 billion people rely on low-resource languages that remain poorly represented in digital technologies. Limited language representation can restrict access to AI-powered services in sectors such as agriculture, healthcare, education and civic participation.
Local language accelerator initiatives combine technological development with partnerships involving universities, research institutions and local language communities. The technologies involved include optical character recognition systems that digitise written texts, automatic speech recognition tools capable of processing spoken language, and text-to-speech technologies that generate digital audio.
Using satellite imagery and AI to improve disaster response
Rapid damage assessment plays a critical role in humanitarian response following natural disasters. Traditional assessment methods often require manual analysis of satellite images and field inspections conducted by experts, a process that can take weeks.
Emergency response operations, however, require reliable information within the first seventy-two hours after a disaster to prioritise rescue operations and humanitarian assistance.
The SKAI platform, developed by the World Food Programme Innovation Accelerator, uses AI-based computer vision to analyse satellite imagery and identify damaged buildings automatically. The system enables humanitarian organisations to assess destruction at the level of individual structures across large geographic areas.
Developed as an open-source project in collaboration with Google Research, the platform can generate prioritised damage assessments within approximately twenty-four hours. Since 2022, the system has analysed more than 3.9 million buildings and identified around 450,000 severely damaged or destroyed structures.
Expanding inclusive participation through the UN Women AI School
Increasing participation in AI development is another priority across the United Nations system. Women remain underrepresented in many AI-related fields, including machine learning engineering and data science.
The UN Women AI School addresses this challenge by providing training programmes designed for policymakers, civil society organisations, UN staff, and young innovators. The initiative aims to strengthen AI literacy and encourage broader participation in shaping the future of digital technologies.
Participants follow structured training tracks combining technical education with discussions on AI governance, ethics, and social impact. Collaborative learning environments encourage participants to develop solutions tailored to the needs of their communities.
More than three thousand participants have taken part in the programme since its launch. A train-the-trainer (ToT) model enables graduates to support future training programmes and expand the initiative to additional regions.
Responsible AI in satellite technologies and earth observation
AI technologies are increasingly integrated into satellite systems and Earth observation platforms. These systems analyse large volumes of geospatial data and generate near-real-time insights about environmental conditions.
Applications include monitoring climate change, analysing natural disasters, and supporting environmental policy planning. Rapid technological progress in this field also raises governance challenges related to transparency and accountability.
Many AI models used in satellite analysis operate as black box systems whose internal decision-making processes are difficult to interpret. Limited transparency can create risks when such systems are used to inform critical policy decisions.
Data bias represents another concern. Training datasets often originate primarily from the Global North, which may lead to inaccurate interpretations of environmental conditions in other regions of the world.
Beyond satellite applications, an AI readiness assessment methodology examines multiple dimensions of national AI ecosystems, including infrastructure, research capacity, institutional readiness and regulatory frameworks. Rather than ranking countries, the assessment identifies strengths and areas requiring further development.
Since its introduction in 2022, the methodology has been implemented in more than seventy countries. More than seventeen thousand stakeholders have participated in consultations associated with the initiative.
Assessment results have contributed to the development of national AI strategies and policy frameworks in several regions. An updated version of the methodology is expected to be released in 2026.
Additionally, UNESCO promotes the ethical development and use of AI through its Recommendation on the Ethics of Artificial Intelligence. The global framework sets out principles on transparency, accountability, fairness, and respect for human rights to guide national policies and international cooperation.
AI for Good and global capacity building
The International Telecommunication Union coordinates the AI for Good initiative, which focuses on applying AI technologies to global challenges while strengthening international cooperation in governance and standards.
The programme operates across multiple areas, including multistakeholder dialogue, technical standard development, governance support and capacity development activities.
More than four hundred AI-related standards have already been developed in areas such as multimedia technologies, energy efficiency and cybersecurity. Governance dialogues organised through the initiative have involved more than one hundred ministers and regulators.
Educational programmes linked to the initiative aim to expand digital skills among young people worldwide through robotics competitions, machine learning challenges and educational partnerships.
The AI for Good Global Summit 2026, set to take place from 7–10 July in Geneva, will convene governments, industry leaders and civil society to advance AI governance, promote responsible innovation, and highlight initiatives that foster inclusive and equitable digital development.
AI tools supporting refugee entrepreneurship
AI technologies are also being used to support economic opportunities for displaced populations. The United Nations Refugee Agency has developed an AI-powered virtual assistant designed to help refugees and asylum seekers transform business ideas into structured business plans.
The platform guides users through financial planning, market analysis and the preparation of investment proposals. The development of the system involved collaboration with NGOs, governments, and entrepreneurial networks across Latin America.
The tool was initially implemented in Paraguay and was designed with input from refugee communities. Remote access allows users to engage with the platform regardless of geographical or institutional constraints.
More than 340 refugee entrepreneurs have used the platform since its launch, with women representing approximately sixty percent of participants. The model is designed to be scalable and could be implemented in additional regions.
Promoting responsible innovation in civilian AI for peace and security
The rapid expansion of AI technologies brings increasing security challenges, particularly due to the potential misuse of civilian AI systems in military, conflict-related, or high-risk contexts. Dual-use applications mean that tools designed for civilian purposes, such as data analysis or autonomous systems, could also be repurposed in ways that threaten international peace, stability or human safety.
The United Nations Office for Disarmament Affairs works to foster responsible innovation practices, ensuring that the development and deployment of AI technologies consider their broader implications for global peace and security. Addressing these risks requires ongoing collaboration and dialogue among policymakers, researchers, industry stakeholders, and civil society, creating a shared framework for understanding and mitigating potential threats.
To support this, the programme organises a comprehensive set of initiatives, including thematic multistakeholder dialogues, academic workshops, public panels, private sector roundtables and in-person training sessions for graduate students. These activities aim not only to raise awareness of emerging security risks, but also to provide practical guidance and tools that promote safe, transparent and accountable AI practices in civilian applications worldwide.
UN 2.0 Communities of Practice
Knowledge sharing and collaboration are strengthened through UN 2.0 Communities of Practice, connecting partners across the United Nations system and beyond. The networks facilitate the exchange of expertise and approaches on digital transformation, data strategy, innovation, and strategic foresight.
Over 18,000 practitioners from more than 160 countries participate, enhancing the collective capacity to address complex AI and digital challenges. Thematic groups, including those focused on digital and data initiatives, support peer-to-peer engagement, professional development, and collaborative problem-solving. Participation allows stakeholders to contribute to a wider ecosystem of expertise and innovation, promoting inclusive digital governance and supporting the Sustainable Development Goals.
Three Democratic senators have raised concerns about Meta’s reported exploration of facial recognition in its smart glasses, warning that it could normalise public surveillance. In a letter to CEO Mark Zuckerberg, Senators Edward Markey, Ron Wyden, and Jeff Merkley asked about consent, biometric data, and the risks of misuse.
The lawmakers said the proposed feature ‘risks normalising mass surveillance at a moment when the federal government is using similar tools to intimidate protesters and chill speech. Although facial recognition may offer real benefits for blind and visually impaired users, Meta’s history of failing to protect user privacy raises serious questions about its plan to deploy this technology in its smart glasses.’
‘Americans do not consent to biometric data collection simply by walking down a public street, entering a café, or standing in a crowd,’ the senators added. ‘Yet, the deployment of this technology would appear to do exactly that – subjecting countless individuals to covert identification without notice, without consent, and without any meaningful opportunity to opt out.’ They warned that such practices would erode longstanding expectations of privacy in public spaces, effectively eliminating public anonymity.
Concerns grew after reports of US Border Patrol and ICE agents using Meta smart glasses. While there is no evidence of facial recognition use, the senators argue that adding identification tools to eyewear could expand undetectable surveillance. The letter questions whether Meta might link facial data with information from its platforms, enabling real-time identification tied to user profiles. Lawmakers warn that this could increase the risks of harassment and targeting.
Meta had previously discontinued facial recognition on Facebook in 2021, citing societal concerns. The senators argue that reintroducing similar technology in wearable devices suggests a shift rather than a retreat. ‘Five years later, Meta appears less worried about those societal concerns and is reportedly planning to deploy facial recognition technology in one of the most dangerous possible settings,’ they wrote.
‘Moreover,’ they continued, ‘Meta is apparently aware of the risks with this technology,’ noting that an internal memo recommended launching the product ‘during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.’
‘In other words,’ the senators added, ‘Meta appears to recognise the serious privacy and civil liberties risks of facial recognition but thinks it can avoid attention by slipping the once-abandoned, ethically fraught product back onto the market while the world is distracted by the Trump administration’s daily chaos.’
The senators have asked Meta to clarify how it would obtain consent from both users and bystanders, how long it would retain biometric data, whether it would use it to train AI models, and whether it could share it with law enforcement, including the Department of Homeland Security. The company has been given until 6 April to respond.
The National Security Agency has released new guidance on managing risks across the AI supply chain, highlighting growing cybersecurity concerns tied to AI and machine learning systems. The joint information sheet outlines how organisations can better assess vulnerabilities when deploying or sourcing AI technologies.
The document defines the AI and machine learning supply chain as a combination of key components, including training data, models, software, infrastructure, hardware, and third-party services. Each element can introduce risks affecting confidentiality, integrity, or availability, particularly as advanced tools such as large language models and AI agents become more widely adopted.
Security risks associated with data include bias, poisoning attacks, and exposure via techniques such as model inversion and data extraction. For models, the guidance warns of hidden backdoors, malware, evasion attacks, and model manipulation. Organisations are advised to use trusted sources, perform integrity checks, and maintain verified model registries to mitigate such threats.
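The integrity checks recommended in the guidance can be illustrated with a short sketch. The file name and registry below are hypothetical, not drawn from the NSA document (the sample digest is simply the SHA-256 of the bytes `b"test"` for demonstration); a real deployment would source digests from a provider's signed manifest or an internal verified model registry:

```python
import hashlib

# Hypothetical registry mapping model artifacts to known-good SHA-256 digests.
# The digest here is the SHA-256 of b"test", used purely for illustration.
VERIFIED_MODELS = {
    "sentiment-v2.onnx": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(path):
    """Stream the file through SHA-256 to avoid loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path, name):
    """Return True only if the artifact's digest matches the registry entry."""
    expected = VERIFIED_MODELS.get(name)
    return expected is not None and sha256_of(path) == expected
```

Streaming the file in fixed-size chunks keeps memory use constant even for multi-gigabyte model weights, and rejecting artifacts absent from the registry fails closed rather than open.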
The paper also highlights software and infrastructure vulnerabilities, noting that AI systems often rely on complex dependencies that expand the attack surface. Recommended measures include malware scanning, testing, patching, and maintaining software bills of materials. Additional risks arise from third-party services, which may introduce weaknesses through their own supply chains or shared environments.
To manage these risks, organisations are urged to improve visibility across their AI ecosystems, identify suppliers and subcontractors, and require documentation such as AI and software bills of materials. The guidance aligns with frameworks from the National Institute of Standards and Technology and MITRE, reinforcing the need for coordinated approaches to AI supply chain security.
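As a minimal sketch of what a software bill of materials can contain (illustrative only, not a formal SPDX or CycloneDX document), Python's standard library can enumerate the packages present in an environment into a machine-readable inventory:

```python
import importlib.metadata
import json

def build_sbom():
    """Collect installed distributions into a minimal bill-of-materials list."""
    components = [
        {"name": dist.metadata["Name"], "version": dist.version}
        for dist in importlib.metadata.distributions()
    ]
    # Sort for stable, diffable output.
    return sorted(components, key=lambda c: (c["name"] or "").lower())

if __name__ == "__main__":
    print(json.dumps(build_sbom(), indent=2))
```

Emitting the inventory as sorted JSON makes successive bills easy to diff, so unexpected dependency changes stand out during review.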
AI systems are increasingly being tested on advanced mathematical problems as researchers assess their reasoning abilities. Competitions such as the Putnam exam have become benchmarks for evaluating performance.
Recent results suggest some AI models can achieve scores comparable to those of top human participants, although other benchmarks face scrutiny. Experts caution that such tests may not reflect real-world mathematical research or practical problem-solving.
Researchers have also explored AI-generated proofs for longstanding mathematical questions. Verification tools are being used to confirm results and reduce errors often produced by AI systems.
Mathematicians say AI can support brainstorming and research, but still requires human oversight. Analysts describe performance as uneven, with strong results in some areas and clear limitations in others.
Microsoft has addressed an Exchange Online outage that disrupted access to email and calendar services for users worldwide. The issue affected multiple connection methods, including Outlook on the web, Outlook desktop, and Exchange ActiveSync.
The company first acknowledged the problem early in the day, saying it was investigating reports of users being unable to access their mailboxes. According to a Microsoft 365 admin centre update, several Exchange Online connection protocols were impacted during the outage.
Although Microsoft later reported that telemetry indicated the issue was no longer occurring for most users, some customers continued to experience access problems. At one point, the Office.com portal also displayed an error message, preventing users from logging in.
Microsoft linked the disruption to an issue within its supporting network infrastructure, which affected how traffic was processed. Engineers implemented configuration changes to restore normal service and continue monitoring the platform to ensure stability.
In a later update, Microsoft confirmed that the Exchange Online outage had been mitigated and that services had been restored. The company said it is still investigating the root cause and will provide further details in a post-incident report, while a separate issue affecting Microsoft 365 Copilot web access remains under review.
NVIDIA has unveiled the Vera CPU, designed specifically for agentic AI and reinforcement learning. According to NVIDIA, it delivers 50% faster performance and double the energy efficiency, and it has already been adopted by Alibaba, Meta, ByteDance, Oracle Cloud, CoreWeave, and Lambda.
Vera features 88 Olympus cores, high-bandwidth memory, and advanced multithreading, supporting large-scale AI deployments. Liquid-cooled racks can host over 22,500 concurrent CPU environments, allowing enterprises and research labs to scale agentic AI efficiently.
The CPU integrates with NVIDIA GPUs via NVLink-C2C and includes ConnectX SuperNIC and BlueField-4 DPUs to enhance networking, storage, and security. Early users like Cursor and Redpanda report major gains in AI agent throughput and real-time data processing.
High performance, energy efficiency, and GPU integration position Vera as a new standard for faster, scalable, and responsive AI systems. The platform supports coding assistants, reinforcement learning, and large-scale data processing, making it suitable for enterprise and scientific use.