Broadcast Engineering Consultants India Limited (BECIL) and the Centre for Development of Advanced Computing (C-DAC) have signed a Memorandum of Understanding to collaborate on advanced technologies and digital transformation. The agreement focuses on joint projects, consultancy, and technical support across sectors.
The partnership covers AI, machine learning, Internet of Things, cybersecurity, 5G, and cloud computing. It also includes the development of turnkey solutions, technology transfer, and the commercialisation of innovative products.
Capacity development is a key component of the collaboration. Both organisations will support workforce upskilling and skill development to strengthen technical capabilities.
Officials stated that the partnership aims to leverage complementary strengths to deliver technology solutions. It is also expected to support innovation and contribute to India’s broader digital development objectives.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
According to the Stanford Institute for Human-Centered AI's AI Index 2026 report, South Korea leads globally in AI patents per capita, reflecting a high concentration of innovation relative to population size.
Such a measure highlights the country’s strong research and development intensity in emerging technologies.
While China and the US dominate in total patent volume, South Korea ranks first in innovation density and third in the number of notable AI models, indicating a balanced performance across research output and technological deployment.
The findings also point to rapid growth in generative AI adoption, alongside sustained legislative activity.
Over recent years, multiple AI-related laws have been enacted, positioning South Korea among the leading economies in developing governance frameworks to support innovation.
The combination of technical output, policy support and adoption trends illustrates how coordinated national strategies can strengthen AI ecosystems, linking research capacity with regulatory development and real-world application.
An independent review by UK Research and Innovation has assessed the performance of The Alan Turing Institute. The evaluation examined whether the institute meets expectations as a national centre for AI and data science.
Findings recognise scientific excellence, strong partnerships and valuable contributions within the UK research system. However, the review identifies the need for a clearer strategic purpose and stronger delivery.
The panel concludes that alignment with national priorities and value for money are not yet satisfactory. Recommendations include improved governance, clearer prioritisation and renewed external scientific scrutiny.
Additional proposals call for stronger stakeholder engagement and a defined mission focused on resilience, security and defence. A framework for value for money is also expected to be agreed with the Engineering and Physical Sciences Research Council.
UK Research and Innovation will work with the institute’s leadership and partners to implement the changes. A development plan is expected by September 2026, with further assessment to follow.
The European Patent Office (EPO) has reinforced cooperation with industry stakeholders through discussions with the German Association of Industry IP Experts, focusing on strengthening the European patent system and supporting innovation.
The meeting brought together representatives from major industrial actors to align priorities and explore future collaboration.
Discussions between the EPO and the stakeholders centred on enhancing technology transfer, empowering startups and fostering economic growth across Europe.
Participants emphasised the importance of inclusive engagement among patent system users instead of fragmented approaches, ensuring that innovation strategies reflect both industrial and societal needs.
The Unitary Patent system was highlighted as gaining traction, particularly among smaller entities such as SMEs, individual inventors and research organisations. Such a trend reflects broader efforts to improve accessibility and scalability within the European innovation ecosystem.
AI also featured prominently, with both sides recognising its growing role in improving efficiency and quality in patent processes.
A human-centric approach remains essential, ensuring that AI deployment supports responsible innovation while maintaining high standards in patent examination and services.
The Commodity Futures Trading Commission (CFTC), an independent agency of the United States federal government, announced the creation of an Innovation Task Force to support the development of new technologies in US derivatives markets. Chairman Michael S. Selig launched the initiative, which focuses on establishing clear regulatory approaches.
The task force will work with the Innovation Advisory Committee to develop frameworks covering crypto assets, blockchain technologies, AI and autonomous systems, and prediction markets. Authorities said the aim is to provide clarity for innovators building new financial products.
According to Selig, clearer rules are intended to support responsible innovation and ensure market participants remain competitive. The task force is also expected to help implement the Commission’s broader innovation agenda.
Coordination with other federal bodies is planned, including collaboration with the US Securities and Exchange Commission and its Crypto Task Force. Michael J. Passalacqua, senior advisor to the Chairman, has been appointed to lead the initiative.
Businesses are beginning to prepare for the commercial potential of quantum computing, a technology that leverages quantum mechanics to solve problems beyond the capabilities of classical computers.
Early engagement focuses on awareness, training, and workshops to explore possible applications across sectors such as pharmaceuticals, energy, finance, and advanced materials.
Companies face several barriers to readiness, including limited technological maturity, unclear business implications, high costs for access and staff training, and a shortage of talent with both quantum and industry expertise.
These obstacles mean that most readiness initiatives remain concentrated in large, research-intensive firms, leaving smaller companies at risk of falling behind.
Support mechanisms are helping firms navigate these challenges. Networking, advisory services, technology centres, R&D grants, and stakeholder consultations help firms access resources and partnerships to accelerate readiness and link research with commercial use.
Building quantum readiness will require ongoing investment in skills, infrastructure, and partnerships, alongside policies that combine exploratory pilots with long-term workforce and software support.
Hybrid approaches integrating quantum computing with AI and high-performance computing offer practical entry points for early adoption, strengthening competitiveness and innovation across industries.
The Trump Administration unveiled a national AI framework to boost competitiveness, security, and benefits for Americans. The plan seeks to ensure that AI innovation supports all citizens while maintaining public trust in the technology.
Six key objectives form the foundation of the policy. These include protecting children online, empowering parents with tools to manage digital safety, strengthening communities and small businesses, respecting intellectual property, defending free speech, and fostering innovation.
The framework also prioritises workforce development to prepare Americans for AI-driven job opportunities.
Federal uniformity is considered critical to the plan’s success. The Administration warns that a patchwork of state regulations could stifle innovation and reduce the United States’ ability to lead globally.
Congress is encouraged to collaborate closely to implement the framework nationwide.
The Administration emphasises that the United States must lead the AI race, ensuring the benefits of AI reach all Americans while addressing challenges such as privacy, security, and equitable access to opportunities.
Digital technologies and AI are increasingly shaping economic development, governance and international cooperation. As these technologies expand rapidly, international organisations are working to ensure that innovation is accompanied by responsible governance, inclusive access and coordinated global policies.
Within the United Nations system, a range of initiatives aim to strengthen cooperation on digital transformation and the development of AI. These efforts address issues such as digital infrastructure, data governance, technological innovation and equitable participation in emerging digital ecosystems. International collaboration plays an essential role in ensuring that the benefits of digital technologies support sustainable development while reducing global inequalities in access to digital resources.
Several programmes across the United Nations system reflect these priorities, combining global governance initiatives with practical AI applications in areas such as development, humanitarian response and digital inclusion. The following sections examine selected initiatives that illustrate how AI and digital cooperation are being advanced across different areas of the UN system.
Global Digital Compact
The Global Digital Compact is a comprehensive international framework adopted by United Nations member states to guide global digital cooperation and enhance the governance of AI. It was negotiated by all 193 member states and reflects broad consultations aimed at shaping a shared vision for a digital future that is open, inclusive, safe, and secure for all. The Compact is part of the Pact for the Future, adopted at the 2024 Summit of the Future in New York.
At its core, the Compact seeks to address persistent digital divides by promoting universal connectivity, affordable access and inclusive participation in the digital economy. Governments and stakeholders have committed to connecting all individuals, schools, and hospitals to the internet, increasing investment in digital public infrastructure, and ensuring that technologies are accessible in diverse languages and formats.
The Compact also emphasises human rights and the protection of fundamental freedoms in the digital space, calling for strengthened legal and policy frameworks that uphold international law and protect users from harms such as misinformation and discrimination. It promotes an open, global, stable, and secure internet while supporting access to independent, fact-based information.
The key objective of the Compact is to enhance international cooperation on data governance and AI for the benefit of humanity. It includes commitments to develop interoperable national data governance frameworks, advance responsible and equitable approaches to AI governance, and establish mechanisms for global dialogue and scientific guidance on AI. These elements reflect the need for collaborative, multistakeholder governance that balances innovation with transparency, accountability, and respect for human rights.
Independent International Scientific Panel on AI
The Independent International Scientific Panel on AI is a mechanism called for within the Global Digital Compact to support evidence‑based policymaking in AI governance. Member states requested the establishment of a multi‑disciplinary panel under the United Nations to assess the opportunities, risks and societal impacts of AI, and to promote scientific understanding across geographic and sectoral divides.
The panel is intended to contribute robust, independent scientific analysis to global AI discussions, ensuring that policy decisions are grounded in research rather than short‑term market pressures or fragmented national approaches. Its mandate includes conducting comprehensive risk and impact assessments, developing common methodologies for evaluating AI systems, and advising on interoperable governance frameworks that respect human rights and international law.
By bringing together experts from diverse disciplines and regions, the panel aims to bridge the gap between scientific developments and policymaking. It is a key institutional mechanism for fostering inclusive AI governance, with balanced geographic representation to ensure that insights reflect global needs rather than narrow technological interests.
The panel also complements the broader Global Dialogue on AI Governance, which seeks to engage governments, international organisations, civil society and technical communities in ongoing discussions about normative approaches, standards, and principles for global AI governance.
The UN Digital Cooperation Portal
The UN Digital Cooperation Portal is a central platform designed to support the implementation of the Global Digital Compact by mapping global digital cooperation activities and facilitating coordination among diverse stakeholders. The portal invites governments, UN entities, civil society organisations, researchers, and private sector actors to voluntarily submit information on initiatives related to the Compact’s objectives.
Launched in December 2025, the portal aggregates initiatives across thematic areas, including digital inclusion, AI governance, data governance, digital infrastructure, and the protection of human rights online. By visualising how activities align with agreed international frameworks, the platform supports strategic collaboration, strengthens transparency and highlights opportunities for joint action across regions and sectors.
The portal generates interactive data visualisations that illustrate how digital cooperation initiatives are evolving at the national, regional and global levels. These tools help identify gaps and overlaps in current efforts, enabling stakeholders to coordinate more effectively in pursuit of shared objectives such as closing digital divides and advancing equitable digital development.
As a resource for governments, UN agencies and external partners, the portal also contributes to the preparatory process for the high‑level review of the Global Digital Compact scheduled for 2027, providing an evidence‑based foundation for assessing progress and emerging policy priorities.
Closing the language gap in AI through local language accelerators
Language diversity remains one of the major challenges in global AI development. The world's population speaks more than seven thousand languages, yet most AI systems currently support only a small number of widely used global languages.
Around 1.2 billion people rely on low-resource languages that remain poorly represented in digital technologies. Limited language representation can restrict access to AI-powered services in sectors such as agriculture, healthcare, education and civic participation.
The local language accelerator initiative combines technological development with partnerships involving universities, research institutions and local language communities. The technologies involved include optical character recognition systems that digitise written texts, automatic speech recognition tools capable of processing spoken language and text-to-speech technologies that generate digital audio.
Using satellite imagery and AI to improve disaster response
Rapid damage assessment plays a critical role in humanitarian response following natural disasters. Traditional assessment methods often require manual analysis of satellite images and field inspections conducted by experts, a process that can take weeks.
Emergency response operations, however, require reliable information within the first seventy-two hours after a disaster to prioritise rescue operations and humanitarian assistance.
The SKAI platform, developed by the World Food Programme Innovation Accelerator, uses AI-based computer vision to analyse satellite imagery and identify damaged buildings automatically. The system enables humanitarian organisations to assess destruction at the level of individual structures across large geographic areas.
Developed as an open-source project in collaboration with Google Research, the platform can generate prioritised damage assessments within approximately twenty-four hours. Since 2022, the system has analysed more than 3.9 million buildings and identified around 450,000 severely damaged or destroyed structures.
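As an illustration of the underlying idea, here is a minimal, hypothetical sketch of per-building change detection: comparing pre- and post-disaster image patches of a single building footprint and flagging large pixel changes as damage. The function name, threshold and logic are illustrative assumptions, not SKAI's actual pipeline, which relies on trained computer-vision classifiers rather than a simple difference score.

```python
# Toy sketch of per-building damage flagging (assumed names and threshold;
# SKAI itself uses trained computer-vision models, not pixel differencing).
import numpy as np

def classify_building(pre_patch: np.ndarray, post_patch: np.ndarray,
                      threshold: float = 0.25) -> str:
    """Compare pre- and post-disaster image patches of one building footprint."""
    # Normalise pixel values to [0, 1] so the score is comparable across images
    pre = pre_patch.astype(float) / 255.0
    post = post_patch.astype(float) / 255.0
    # Mean absolute change across the patch serves as a crude damage proxy
    change_score = float(np.mean(np.abs(post - pre)))
    return "damaged" if change_score > threshold else "intact"

# Example: a building whose roof darkened substantially after the event
pre = np.full((32, 32), 200, dtype=np.uint8)
post = np.full((32, 32), 40, dtype=np.uint8)
print(classify_building(pre, post))  # large change, so flagged as "damaged"
```

Running such a comparison over every building footprint extracted from satellite imagery is what allows structure-level assessments across large areas, although a production system must also handle cloud cover, image alignment and far subtler damage signatures.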
Expanding inclusive participation through the UN Women AI School
Increasing participation in AI development is another priority across the United Nations system. Women remain underrepresented in many AI-related fields, including machine learning engineering and data science.
The UN Women AI School addresses this challenge by providing training programmes designed for policymakers, civil society organisations, UN staff, and young innovators. The initiative aims to strengthen AI literacy and encourage broader participation in shaping the future of digital technologies.
Participants follow structured training tracks combining technical education with discussions on AI governance, ethics, and social impact. Collaborative learning environments encourage participants to develop solutions tailored to the needs of their communities.
More than three thousand participants have taken part in the programme since its launch. A train-the-trainer (ToT) model enables graduates to support future training programmes and expand the initiative to additional regions.
Responsible AI in satellite technologies and earth observation
AI technologies are increasingly integrated into satellite systems and Earth observation platforms. These systems analyse large volumes of geospatial data and generate near-real-time insights about environmental conditions.
Applications include monitoring climate change, analysing natural disasters, and supporting environmental policy planning. Rapid technological progress in this field also raises governance challenges related to transparency and accountability.
Many AI models used in satellite analysis operate as black box systems whose internal decision-making processes are difficult to interpret. Limited transparency can create risks when such systems are used to inform critical policy decisions.
Data bias represents another concern. Training datasets often originate primarily from the Global North, which may lead to inaccurate interpretations of environmental conditions in other regions of the world.
Assessing national AI readiness
UNESCO's Readiness Assessment Methodology examines multiple dimensions of national AI ecosystems, including infrastructure, research capacity, institutional readiness and regulatory frameworks. Rather than ranking countries, the assessment identifies strengths and areas requiring further development.
Since its introduction in 2022, the methodology has been implemented in more than seventy countries. More than seventeen thousand stakeholders have participated in consultations associated with the initiative.
Assessment results have contributed to the development of national AI strategies and policy frameworks in several regions. An updated version of the methodology is expected to be released in 2026.
Additionally, UNESCO promotes the ethical development and use of AI through its Recommendation on the Ethics of Artificial Intelligence. The global framework sets out principles on transparency, accountability, fairness, and respect for human rights to guide national policies and international cooperation.
AI for Good and global capacity building
The International Telecommunication Union coordinates the AI for Good initiative, which focuses on applying AI technologies to global challenges while strengthening international cooperation in governance and standards.
The programme operates across multiple areas, including multistakeholder dialogue, technical standard development, governance support and capacity development activities.
More than four hundred AI-related standards have already been developed in areas such as multimedia technologies, energy efficiency and cybersecurity. Governance dialogues organised through the initiative have involved more than one hundred ministers and regulators.
Educational programmes linked to the initiative aim to expand digital skills among young people worldwide through robotics competitions, machine learning challenges and educational partnerships.
The AI for Good Global Summit 2026, set to take place from 7–10 July in Geneva, will convene governments, industry leaders and civil society to advance AI governance, promote responsible innovation, and highlight initiatives that foster inclusive and equitable digital development.
AI tools supporting refugee entrepreneurship
AI technologies are also being used to support economic opportunities for displaced populations. The United Nations Refugee Agency has developed an AI-powered virtual assistant designed to help refugees and asylum seekers transform business ideas into structured business plans.
The platform guides users through financial planning, market analysis and the preparation of investment proposals. The development of the system involved collaboration with NGOs, governments, and entrepreneurial networks across Latin America.
The tool was initially implemented in Paraguay and was designed with input from refugee communities. Remote access allows users to engage with the platform regardless of geographical or institutional constraints.
More than 340 refugee entrepreneurs have used the platform since its launch, with women representing approximately sixty percent of participants. The model is designed to be scalable and could be implemented in additional regions.
Promoting responsible innovation in civilian AI for peace and security
The rapid expansion of AI technologies brings increasing security challenges, particularly due to the potential misuse of civilian AI systems in military, conflict-related, or high-risk contexts. Dual-use applications mean that tools designed for civilian purposes, such as data analysis or autonomous systems, could also be repurposed in ways that threaten international peace, stability or human safety.
The United Nations Office for Disarmament Affairs works to foster responsible innovation practices, ensuring that the development and deployment of AI technologies consider their broader implications for global peace and security. Addressing these risks requires ongoing collaboration and dialogue among policymakers, researchers, industry stakeholders, and civil society, creating a shared framework for understanding and mitigating potential threats.
To support this, the programme organises a comprehensive set of initiatives, including thematic multistakeholder dialogues, academic workshops, public panels, private sector roundtables and in-person training sessions for graduate students. These activities aim not only to raise awareness of emerging security risks, but also to provide practical guidance and tools that promote safe, transparent and accountable AI practices in civilian applications worldwide.
UN 2.0 Communities of Practice
Knowledge sharing and collaboration are strengthened through UN 2.0 Communities of Practice, connecting partners across the United Nations system and beyond. The networks facilitate the exchange of expertise and approaches on digital transformation, data strategy, innovation, and strategic foresight.
Over 18,000 practitioners from more than 160 countries participate, enhancing the collective capacity to address complex AI and digital challenges. Thematic groups, including those focused on digital and data initiatives, support peer-to-peer engagement, professional development, and collaborative problem-solving. Participation allows stakeholders to contribute to a wider ecosystem of expertise and innovation, promoting inclusive digital governance and supporting the Sustainable Development Goals.
Researchers in the UK will gain a new AI lab designed to drive transformational breakthroughs in healthcare, transport, science, and everyday technology, supported by government funding.
The lab will be backed by up to £40 million in funding over six years, alongside substantial access to large-scale computing resources, and will invite UK researchers to pitch their most ambitious ideas.
The Fundamental AI Research Lab will focus on tackling core AI challenges, including hallucinations, unreliable memory, and unpredictable reasoning.
The lab will support high-risk, blue-sky research rather than simply scaling existing systems. Its goal is to unlock entirely new capabilities that could improve medical diagnoses, infrastructure resilience, scientific discovery, and public services.
UK officials highlighted the country’s strength in world-class universities, AI talent, and a thriving sector attracting over £100 billion in private investment. Experts, including Raia Hadsell of Google DeepMind, will peer-review funding applications, prioritising bold, high-reward proposals.
The initiative is part of the UKRI AI Strategy, which is backed by £1.6 billion and aims to strengthen research and ensure AI benefits society and the economy. UK AI projects like RADAR for rail faults and the IXI Brain Atlas for Alzheimer’s research demonstrate the approach’s potential impact.
South Africa’s fintech sector has evolved from a niche disruptor into a pillar of the digital economy, fuelled by rapid digital adoption and entrepreneurial growth. Regulators are now tasked with supporting innovation in decentralised finance and AI while safeguarding market stability and consumer protection.
Coordinated oversight has been central to that effort. The Intergovernmental Fintech Working Group, bringing together the National Treasury, the South African Reserve Bank and the Financial Sector Conduct Authority, promotes a harmonised and principle-based regulatory approach.
A significant turning point came when crypto assets were classified as financial products under the Financial Advisory and Intermediary Services Act. Licensing requirements for Crypto Asset Service Providers and alignment with Financial Action Task Force standards strengthened consumer safeguards and anti-money laundering controls.
Fintech also plays a growing role in financial inclusion, particularly through mobile money, digital lending and digital payments. Wider access to affordable financial tools supports inclusive economic growth across underserved communities.
AI presents fresh regulatory questions around bias, transparency and operational resilience. Ensuring compliance with the Protection of Personal Information Act while encouraging responsible experimentation remains central to South Africa’s evolving fintech strategy.