Digital on Day 4 of UNGA80: Governance, inclusion, and child safety in the AI age


Welcome to the fourth daily report from the General Debate at the 80th session of the UN General Assembly (UNGA80). Our daily hybrid AI–human reports bring you a concise overview of how world leaders are framing the digital future.

On Day 4, artificial intelligence (AI) governance, digital cooperation, and the critical issue of child safety in the digital space stood out in the statements. Member states underlined that the transformative potential of AI for development – from the green energy transition to improved public services – is inextricably linked to the urgent need for global governance. Several leaders welcomed the new AI mechanisms established by UNGA, while others called for new frameworks to manage risks, particularly those related to cybercrime, disinformation, and the mental health of youth. A recurring theme was the need to actively address the digital divide through investments in digital infrastructure, skills, and technology transfer, stressing that the benefits of this new era must be shared fairly with all. The discussions reinforced the message that tackling these complex, interconnected challenges requires mature multilateralism and reinforced international cooperation.

To keep the highlights clear and accessible, we leave them in bullet points — capturing the key themes and voices as they emerge.


Global digital governance and cooperation

  • The opportunities and consequences of the digital revolution are among today’s complex and interconnected challenges. They cannot be solved by acting alone. (Ireland)
  • Information wars and the regulation of AI are among the global challenges to tackle and which require solidarity among member states. (Cote d’Ivoire)
  • Addressing technological challenges that have overwhelmed natural systems, economies, and even basic human rights requires international cooperation and the United Nations. (Belize)
  • Global governance rules should be improved at a faster pace, and cooperation should be strengthened so that technological progress can bring real benefits to humanity. (China)
  • There is a call to strengthen multilateral governance, defend international law, promote human rights, and adopt joint measures to address global technological challenges. (Andorra)
  • The UN must embrace digital diplomacy for the AI age. (Malta)
  • Inclusive, multistakeholder approaches to global digital governance, AI, and space technologies can ensure that they advance the Sustainable Development Goals (SDGs). (Bulgaria)
  • The Global Digital Compact is welcomed. (Cote d’Ivoire) It is an opportunity to strengthen multilateralism, which is needed for its implementation and a more inclusive global governance. (Saint Vincent and the Grenadines, Tonga) The Compact is not a luxury, but a necessity for developing countries (Lesotho), as it can help advance equitable access to digital technologies (Cabo Verde). The broader Pact for the Future provides a roadmap for protecting people and the planet. (Barbados)

Artificial intelligence

Responsible AI (governance)

  • Without safeguards, AI can be very dangerous. It can impact children’s mental health, spread disinformation, cause displacements on the job market, and concentrate immense power in the hands of a few multinational corporations. (Greece)
  • Unregulated AI, while having tremendous promise, poses significant risk. Preserving a rule-based international system can help address the risk. (Barbados)
  • There is a need to build a global governance architecture through multilateral negotiations that will ensure safe, secure, ethical, and inclusive use of AI. The capabilities of this technology should be harnessed responsibly and collectively. (Mauritius)
  • The growing challenge of AI requires a mature multilateralism to tackle successfully. (Saint Vincent and the Grenadines)
  • AI and other technologies should adhere to the principles of people-centred development, technology for good, and equitable benefits; this requires improving relevant governance rules and strengthening global governance cooperation. (China)
  • A call was made for the adoption of binding universal standards to regulate the use of AI and ensure it is used to achieve development for the benefit of all. (Cote d’Ivoire)
  • A call was made for an international convention to regulate and govern the development of AI. (Bahrain)
  • Support is expressed for efforts to develop a governance framework to manage the responsible use of AI for development. (Solomon Islands)
  • The establishment of the Independent International Scientific Panel on AI and the Global Dialogue on AI Governance is welcomed, as these lay the foundations of a global architecture where AI can be steered by science and guided by cooperation. They can also help to avoid deepening inequality and leaving people exposed to the risk and exploitation of AI and the distortion of facts. (Greece, Barbados, Mauritius, Zimbabwe)
  • There is a proposal for a Global AI Governance Initiative and the establishment of a World AI Cooperation Organisation. (China)

AI for development and growth

  • The transformative potential of AI as a tool for development was recognised. (Greece, Zimbabwe)
  • AI and data analytics offer real opportunities to drive an inclusive, just energy transition, particularly through off-grid solutions and smarter energy planning. (Samoa)
  • AI, large language models, and quantum computing must not be biased, and their benefits must be shared fairly with all to avoid creating an entire generation that feels excluded and marginalised, making them vulnerable to harmful temptations. (Bangladesh)
  • Cooperating with Israel will provide Arab and Muslim leaders with groundbreaking Israeli technologies, including in AI. (Israel)

Cybersecurity and cybercrime

  • Transnational criminal networks involved in cybercrime are an existential threat to states. (Jamaica)
  • Criminals are misusing technology for harmful behaviours, with destabilising consequences. Establishing frameworks and strategies to combat the use of technology for criminal purposes is supported. (Zimbabwe)
  • Support is expressed for efforts to develop a governance framework to address cybersecurity challenges. (Solomon Islands)
  • Cybersecurity is one area of cooperation with the EU, the USA, and Brazil. (Cabo Verde)
  • Partnership is sought with states, organisations, and regional and international groupings to strengthen cybersecurity. (Bahrain)
  • There is a need for an open and secure internet. (Bulgaria)

Human rights in the digital space

  • Support is expressed for efforts to develop a governance framework to address the protection of data and privacy. (Solomon Islands)
  • Multilingualism, which facilitates inclusive dialogue, must be promoted, especially in a context of homogenisation and digitalisation where gaps may leave people behind. (Andorra)
  • An open and secure internet and the protection of human rights are emphasised. (Bulgaria)

Child safety and rights

  • In the digital age, children face new risks and threats, often invisible. A Centre for Digital Well-being and Digital Skills and Competencies and a Digital Well-being Plan for Children and Youth have been created, with specific actions to protect minors and youth in the digital environment. Regulatory and technical frameworks are sought, with the ITU and other agencies, to ensure the internet is a tool for development and child protection. (Andorra)
  • A safe, inclusive digital environment is needed that places children’s rights at the very heart of it. (Andorra)
  • There is a need to protect the mental health of children from the unsupervised experiment run with their brains by platforms where harmful content and addictive scrolling are intentional. Big platforms can no longer profit at the expense of children’s mental health, and a business model built on addictive algorithms that feed what can be labelled as digital junk is unacceptable. Digital technology is no different than any other industry that needs to operate under health and safety regulations, guided by the principle: “Do no harm.” (Greece)
  • A proposal for a pan-European Digital Age of Majority to access digital platforms is being examined by the European Commission, with support from 13 EU Member States. (Greece)
  • Laws are being strengthened to protect children susceptible to harm from technology in this digital age. (Tonga)

Disinformation and hate speech

  • Disinformation and fake news undermine trust. (Pakistan)
  • The spread of fake news distorts reality and threatens the stability of societies, creating a platform for hate to thrive and prejudice to rise, contributing to the “crisis of truth”. (Barbados)
  • Disinformation and hate speech have become matters of grave concern, compounded by the deliberate use of fake news and AI-driven deepfakes. Cooperation is needed to confront these challenges before they erode trust and weaken social harmony. (Bangladesh)
  • Support is expressed for efforts to develop a governance framework to combat misinformation. (Solomon Islands)
  • An international convention is called for to combat religious hate speech and racism and ban the abuse of digital platforms to incite extremism, radicalism, or terrorism. (Bahrain)

Digital technologies for development

Opportunities, risks, and applications

  • Technology is both our greatest shared opportunity and one of the defining challenges for our future prosperity. (Greece) Advancements in technologies like AI and network communications, along with their benefits, also bring potential risks. (China) 
  • The principles of people-centred development, technology for good, and equitable benefits need to be adhered to. (China)
  • The digital transformation, including AI and data analytics, offers real opportunities to drive an inclusive, just energy transition, particularly through off-grid solutions and smarter energy planning. (Samoa)
  • High-tech innovations developed in Taiwan – including semiconductors, AI, and biotech – are vital to global supply chain security and sustainable development. (Belize)
  • Digitisation, AI and crypto are embraced as tools of the future. (Pakistan)
  • Digital innovation is promoted as a way to enable a safe, stable, prosperous, and sustainable environment. (Bahrain)
  • The clean energy potential of the country presents an opportunity to host data centres powered sustainably by renewable energy, which would advance Africa’s digital transformation. Openness is expressed for investment and partnerships in building global data centres. (Lesotho)
  • A global SIDS data hub within the SIDS Center of Excellence in Antigua has been launched to improve data, secure investments, and achieve debt sustainability. (Antigua and Barbuda)
  • Examples were given of sectors where digital transformation is introduced: taxation, customs, and land deeds issuance (Togo); online trade union registration (Bangladesh); online healthcare services (Belize).
  • Access to media platforms and new technologies has been weaponised to coerce compliance with some climate goals. (Trinidad and Tobago)

Digital inclusion and access

  • The digital divide should not be allowed to widen further. Resource and capacity constraints of developing economies have to be acknowledged and addressed. (Mauritius)
  • In the fast-changing technological era, a deep concern is the widening digital divide facing youth in the developing world, where the benefits of quantum computing, AI, and large language models must be shared fairly. (Bangladesh)
  • The need for digital inclusion is emphasised. Digital connectivity is prioritised at a national level. (Bulgaria)
  • Priority is given to investment in affordable digital infrastructure, promoting digital literacy, and nurturing innovation ecosystems, with a focus on empowering youth, women, and rural communities. (Lesotho)
  • Investments are made in digital literacy and IT and AI-related skills for the young generation. (Bangladesh) Pilot programmes are run in AI education, and teachers and students will soon engage with custom-designed AI teaching assistants. (Greece)

Technology transfers, cooperation, and support

  • A commitment to official development assistance, technical cooperation, and the sharing of knowledge and best practices is reaffirmed. (Andorra)
  • Member states must commit to technical assistance. (Jamaica)
  • There is a call for increased technology transfers and capacity-building initiatives. (Tuvalu)
  • Ensuring access to knowledge, data and science is needed to inform strategic planning, enhance resilience, and foster global cooperation in the maritime field. (Tuvalu)
  • Maritime domain awareness and the provision of satellites and data sharing services should be forms of standard support for SIDS in their efforts to protect marine ecosystems. (Antigua and Barbuda)
  • Reparations must also take the form of sustainable investment in technology (along with other areas) to allow Africa to develop and fully enjoy its potential. (Togo)

For other topics discussed, head over to our dedicated UNGA80 page, where you can explore more insights from the General Debate.

The General Debate at the 80th session of the UN General Assembly brings together high-level representatives from across the globe to discuss the most pressing issues of our time. The session took place against the backdrop of the UN’s 80th anniversary, serving as a moment for both reflection and a forward-looking assessment of the organisation’s role and relevance.

Weekly #231 UNGA80 turns spotlight on digital issues and AI governance


19 – 26 September 2025


HIGHLIGHT OF THE WEEK

Technology is everywhere at this year’s UN General Assembly. Whether in the General Debate, side events on digital prosperity, or the launch of a new dialogue on AI governance, governments and stakeholders confronted the urgent question of how to ensure that digital transformation serves humanity. Here are the key moments from the week to date.

Digital Cooperation Day: From principles to implementation in global digital governance

On 22 September, the UN Office for Digital and Emerging Technologies (ODET) hosted Digital Cooperation Day, marking the first anniversary of the Global Digital Compact. The event gathered leaders from governments, business, academia, and civil society to discuss how to shift the focus from principle-setting to implementation of digital governance. Discussions covered inclusive digital economies, AI governance, and digital public infrastructure, with sessions on privacy, human rights in data governance, and the role of technology in sustainable development and climate action. Panels also explored AI’s impact on the arts and innovation, while roundtables highlighted strategies for responsible and equitable technology use. The Digital Cooperation Day is set to become an annual platform for reviewing progress and addressing new challenges in international digital cooperation. 

The General Debate of the UNGA80

The General Debate opened on 23 September under the theme ‘Better together: 80 years and more for peace, development and human rights’. While leaders addressed a broad spectrum of global challenges, digital and AI governance were recurring concerns.


Day 1 debates circled around a central message: technology must remain a servant of humanity, not its master. From calls to ensure AI benefits all societies and to build universal guardrails for its responsible use, to concerns over cybercrime, disinformation, and the governance of critical minerals exploitation, delegations stressed the urgent need for cooperation, inclusivity, and safeguards.

Day 2 debates underscored a need to align rapid technological change with global governance, with countries calling for stronger international cooperation and responsible approaches to the development and use of technology. Delegations emphasised that digital technologies must serve humanity – advancing development, human rights, and democracy – while also warning of the growing security risks posed by AI misuse, disinformation, hybrid warfare, and cyber threats. Alongside some calls for rules and ethical standards, many highlighted the importance of inclusion, investment in digital infrastructure, and ensuring that all states can share in the benefits of the digital age.

On Day 3 of the UN General Assembly’s 80th session, AI and digital transformation remained at the forefront of global debates. Member states voiced both optimism and concern: from calls for ethical, human-centred governance of AI and stronger safeguards for peace and security, to warnings about disinformation, repression, and widening digital divides. Governments also highlighted the promise of digital technologies for development, stressing the importance of inclusion, connectivity, and technology transfer. The discussions underscored a common thread—while digital innovation offers extraordinary opportunities, its risks demand global cooperation, shared standards, and a commitment to human dignity.

Diplo and the Geneva Internet Platform are providing reporting from this event, which will last through 30 September, so be sure to bookmark our dedicated web page.

Digital@UNGA 2025: Digital for Good – For People and Prosperity

On 23 September, the International Telecommunication Union (ITU) and the UN Development Programme (UNDP) hosted Digital@UNGA 2025: Digital for Good – For People and Prosperity. The anchor event spotlighted digital technologies as tools for inclusion, equity, and opportunity. Affiliate sessions throughout the week explored trust, rights, and universal connectivity, while side events examined issues ranging from AI for the SDGs and digital identity to green infrastructure, early-warning systems, and space-based connectivity. The initiative sought to showcase digital tools as a force for healthcare, education, and economic empowerment, and to inspire action and dialogue towards an equitable and empowering digital future for all.

Security Council debate on AI

The UN Security Council held a high-level debate on AI, highlighting its promise and its urgent risks for peace and security. The debate, chaired by the President of the Republic of Korea, Lee Jae Myung, underscored a shared recognition that AI offers enormous benefits, but without strong global cooperation and governance, it could deepen divides, destabilise societies, and reshape warfare in dangerous ways.

The launch of the Global Dialogue on AI Governance

A major highlight was the High-level Meeting to Launch the Global Dialogue on AI Governance, held on 25 September.

Senior leaders outlined how AI could drive economic growth and development, particularly in the Global South, while plenary discussions saw stakeholders present their perspectives on building agile, responsive and inclusive international AI governance for humanity. A youth representative closed the session, underscoring younger generations’ stake in shaping AI’s future.

The Global Dialogue on AI Governance is tasked, as decided by the UN General Assembly this August, with facilitating open, transparent and inclusive discussions on AI governance. The dialogue is set to hold its first meeting in 2026 in Geneva, alongside the AI for Good Summit.

Launch of open call for Independent International Scientific Panel on AI

The UN Secretary-General has launched an open call for candidates to join the Independent International Scientific Panel on Artificial Intelligence. Agreed by member states in September 2024 as part of the Global Digital Compact, the 40-member Panel will provide evidence-based scientific assessments on AI’s opportunities, risks, and impacts. Its work will culminate in an annual, policy-relevant – but non-prescriptive – summary report presented to the Global Dialogue, along with up to two updates per year to engage with the General Assembly plenary. Following the call for nominations, the Secretary-General will recommend 40 members for appointment by the General Assembly.


IN OTHER NEWS THIS WEEK

Global initiative calls for AI red lines by 2026

A coalition of global experts and leaders has launched the Global Call for AI Red Lines, an initiative that calls for clear red lines to govern the development and deployment of AI.

The initiative warns that advanced AI could soon far surpass human capabilities, escalating risks such as engineered pandemics, mass disinformation, manipulation of individuals (including children), security threats, widespread unemployment, and human rights violations. Some systems have already exhibited harmful or deceptive behaviour, and if left unchecked, maintaining meaningful human control may become increasingly difficult.

The campaign calls for an operational international agreement on red lines for AI, with robust enforcement mechanisms by 2026, building on existing frameworks and corporate commitments to ensure all advanced AI developers are held accountable.

Signatories include Nobel laureates, former heads of state, and leading AI researchers such as Geoffrey Hinton, Ian Goodfellow, and Yoshua Bengio, as well as OpenAI co-founder Wojciech Zaremba and authors Yuval Noah Harari and Stephen Fry.

Why it matters: Warnings about AI’s potentially existential threats are far from new. As early as the 1960s, computer scientist I.J. Good cautioned about an ‘intelligence explosion,’ in which machines could rapidly surpass human cognitive abilities. Today, it often feels like there’s an AI researcher or some other public figure raising concerns about the technology every week. So what makes this initiative stand out? It combines high-profile backing, a demand for an international agreement on red lines, and a concrete timeline. Let’s see what impact it will have. 


TikTok’s great American makeover

With an executive order, US President Donald Trump brought the protracted TikTok drama to a climax, paving the way for a new company—led by American investors who will own 80% of the platform—to take control of the app. TikTok’s (soon to be former) parent company, ByteDance, and its Chinese investors will retain a minority stake of less than 20%.

A new US-led joint venture will oversee the app’s algorithm, code, and content moderation, while all American user data will be stored on Oracle-run servers in the USA. The venture will have a seven-member board, six of whom are American experts in cybersecurity and national security. 

Media reports indicate that the US investor group is led by software giant Oracle, with prominent backers including private equity firm Silver Lake, media moguls Rupert and Lachlan Murdoch, and Dell CEO Michael Dell.

The crux of the matter: All US user data will be stored securely on Oracle-run servers in the USA, preventing foreign control. Software updates, algorithms, and data flows will face strict monitoring, with recommendation models retrained and overseen by US security partners to guard against manipulation.

The US government has long argued that the app’s access to US user data poses significant risks, as ByteDance is possibly subject to China’s 2017 National Intelligence Law, which requires any Chinese entity to support, assist, and cooperate with state intelligence work – including, possibly, the transfer of US citizens’ TikTok data to China. On the other hand, TikTok and ByteDance have maintained that TikTok operates independently and respects user privacy.

What’s next? There are still some details to hash out, such as whether US users will need to install a new app altogether. Nevertheless, this agreement marks a significant step in resolving one of the most high-profile tech-policy disputes of the decade. The executive order gives the parties 120 days to complete the deal.

The bottom line: For millions of American users, the political wrangling is background noise. The real change will be felt in their feeds—whether the new, American-guarded TikTok can retain the chaotic creativity that made it a cultural force.


Apple urges EU to scrap Digital Markets Act, calls for ‘fit for purpose’ alternative

Apple has formally requested that the European Commission repeal the Digital Markets Act (DMA), Europe’s landmark digital antitrust law, while ‘a more appropriate fit for purpose legislative instrument is put in place.’ 

This does not come out of left field: the European Commission launched a public consultation on the first review of the DMA on 3 July, with 24 September as the deadline for submitting views.

Narrowly meeting the deadline, Apple submitted a response arguing that the DMA leaves it with two bad choices: either weaken the security and smooth experience of its devices by opening them up to rivals, or hold back features from EU users. It points to delayed launches of tools like Live Translation with AirPods, iPhone Mirroring, and improved location services, which Apple says depend on tight integration that the DMA prevents.

The big picture: Critics in the US argue that European digital regulations unfairly target US tech giants. Apple has acknowledged the challenge, saying, ‘Over time, it’s become clear that the DMA isn’t helping markets. It’s making it harder to do business in Europe.’ 

EU digital affairs spokesman Thomas Regnier noted that the Commission was ‘not surprised’ by the tech giant’s move. ‘Apple has simply contested every little bit of the DMA since its entry into application,’ Regnier said. Despite these complaints, the EU remains firm: thanks to the DMA, companies have the right to compete fairly, and gatekeepers like Apple must allow interoperability of third-party devices with their operating systems, Regnier underlined.

The bottom line: Compliance with the DMA is mandatory, and there is little indication that the rules will ease.

The most likely outcome is that Apple will continue operating under the DMA while seeking ways to adapt and lobbying for adjustments that reduce disruption. European users may see some delays in new features or modifications to services, though.


Record $2.5b settlement forces Amazon to overhaul Prime sign-up and cancellation practices

Amazon has agreed to a $2.5 billion settlement with the US Federal Trade Commission (FTC) over deceptive Prime membership practices. The FTC’s investigation, initiated in June 2023, revealed that Amazon enrolled customers into its Prime program without their explicit consent, obscured critical information about costs and terms, and implemented a complex cancellation process, described as the ‘Iliad process’, designed to deter users from unsubscribing. Approximately 35 million consumers were affected by these tactics.

Under the terms of the settlement, Amazon is required to pay a $1 billion civil penalty (the largest ever in a case involving an FTC rule violation) and provide $1.5 billion in refunds to consumers harmed by the deceptive Prime enrollment practices (the second-highest restitution award ever obtained by FTC action). 

The settlement requires Amazon to make Prime enrollment and cancellation clear and simple, fully disclose costs and terms, allow easy cancellations, and have an independent supervisor ensure compliance.

This unprecedented settlement underscores the growing scrutiny of tech giants’ business practices and sets a significant precedent for consumer protection enforcement.


The cyberattack that disrupted major European airports

A cyberattack targeting Collins Aerospace, a critical systems provider that operates check-in and boarding platforms for numerous airports around the world, caused widespread disruption at major European airports. Passengers at London Heathrow, Berlin, and Brussels experienced long queues, flight delays, and cancellations throughout Saturday, with some recovery reported on Sunday, though disruptions continued, particularly at Heathrow and Berlin. 

Collins Aerospace confirmed that its Muse software had been hit by a cyberattack and said teams were working to restore services.

In response to the incident, the UK’s National Crime Agency arrested a man in West Sussex on suspicion of computer misuse offences. The suspect has been released on conditional bail while the investigation continues. This might suggest that the investigation is complex and far from concluded. 

Industry experts pointed out that this event highlights the vulnerability of the aviation sector, which often relies on shared software platforms. They suggested that stronger backup systems and better cooperation are needed to improve resilience against such attacks.


LOOKING AHEAD
Geneva blog

The next meeting of the Geneva Data Community, organised on behalf of the Swiss Federal Statistical Office, will bring together key stakeholders for an exchange on current initiatives and developments in the data field and an update from the World Health Organisation (WHO) on advancing the continuum between data, digital, and AI to improve health outcomes.

CADE Mapping and Baseline Study Reports Launch

 The Civil Society Alliances for Digital Empowerment (CADE) consortium will launch its Mapping and Baseline Study Reports. The reports provide a comprehensive overview of civil society participation in key Internet governance spaces—including the IGF, ICANN, ITU, and IETF—with a particular focus on amplifying underrepresented voices from the Global South. They also put forward practical recommendations to strengthen inclusive and meaningful engagement of civil society in digital policy processes.

6th AI Policy Summit

The 6th AI Policy Summit continues the multi-stakeholder dialogue with leading experts in exploring the use of public policy and societal engagement to capture the benefits of artificial intelligence, minimize its risks, and enhance its adoption.



READING CORNER
World Trade Report 2025

The 2025 edition of the World Trade Report reveals that, with the right enabling policies, AI could boost the value of cross-border flows of goods and services by nearly 40% by 2040 thanks to productivity gains and lower trade costs. However, for AI and trade to contribute to inclusive growth — with benefits shared widely — policies need to be in place to bridge the digital divide, invest in workforce skills, and maintain an open and predictable trading environment.

AI concepts

Learn the essential AI vocabulary you need. This guide explains key terms like parameters (7B vs 70B), tokens, context windows, LLMs, and AI hallucination.

Digital on Day 3 of UNGA80: AI governance, digital cooperation, and development take centre stage


Welcome to the third daily report from the General Debate at the 80th session of the UN General Assembly (UNGA80). Our daily hybrid AI–human reports bring you a concise overview of how world leaders are framing the digital future.

On Day 3, AI and digital transformation remained at the forefront of global debates. Member states voiced both optimism and concern: from calls for ethical, human-centred governance of AI and stronger safeguards for peace and security, to warnings about disinformation, repression, and widening digital divides. Governments also highlighted the promise of digital technologies for development, stressing the importance of inclusion, connectivity, and technology transfer. The discussions underscored a common thread—while digital innovation offers extraordinary opportunities, its risks demand global cooperation, shared standards, and a commitment to human dignity.

To keep the highlights clear and accessible, we leave them in bullet points — capturing the key themes and voices as they emerge.


Global digital governance and cooperation

  • There is a need for global standards for transparency, and accountability mechanisms to address abuses associated with digital technologies; these should be as dynamic as the technologies themselves. (European Union)
  • Technological breakthroughs, including artificial intelligence, must foster peace, development, and human dignity. (Haiti)

Artificial intelligence

Responsible AI (governance)

  • A human-centred approach to AI is favoured – one based on fundamental values, democracy, and the rule of law. Having adopted a regulatory framework for responsible AI, the EU calls for an equivalent level of ambition in the international domain. (European Union)
  • Rapid technological change, especially the rise of artificial intelligence, must be harnessed in a safe, responsible, and inclusive manner. (Montenegro)
  • AI is developing with lightning speed and largely unchecked, posing obvious risks to the social fabric without any agreement on rules and boundaries. (Liechtenstein)
  • Global/international cooperation is needed to set AI on the right course (United Kingdom), and ensure AI systems remain safe, secure, and trustworthy. (Micronesia)
  • A move towards multilateral and ethical governance of AI is necessary to guarantee inclusive access and ensure its use is guided by the common good. (Ecuador)
  • The global community must support innovation in emerging technologies like AI while addressing the associated risks. (Sweden)
  • AI brings enormous opportunities but also incalculable risks for civilisation, and it should be made a priority of UNGA’s 80th session. (North Macedonia)
  • Concern is expressed about the misuse of AI by capitalism, which could accelerate climate change and liquidate the planet. (Bolivia)
  • The internet, social media platforms, and artificial intelligence reinforce isolation by using algorithms that ensure people receive more of the same content rather than new ideas. (Ghana)

AI for development and growth

  • We must change with the times and take advantage of today’s opportunities such as using AI and other technologies. (Netherlands)
  • AI is the most powerful new lever to advance the UN charter’s vision of social progress and better standards of life. It needs to be forged as a force for freedom, prosperity, and human dignity. (United Kingdom)
  • AI should be championed as a bridge-builder across continents to share its extraordinary potential. (United Kingdom)
  • AI for development is championed through partnerships with African nations to create AI ecosystems that empower communities to meet the sustainable development goals. (United Kingdom)

Digital tech, security, and peace

Cybersecurity

  • Cyber threats are among the major challenges of our time. People trust the UN to tackle such challenges, but often the responses provided have fallen short. (Equatorial Guinea)
  • Micronesia is committed to developing national strategies and policies to safeguard digital data and mitigate the risk of malicious attacks. (Micronesia)
  • Artificial intelligence is being used to consolidate repression and empower criminals across the internet. (United Kingdom)

International peace and security

  • New technologies are being utilised to disrupt communications and guidance systems. (Yemen)
  • Wars are now multidimensional, including media, information, and cyberwarfare, war from space, and the use of other technologies developed with impunity. (Bolivia)
  • AI, social media, and the internet, including the dark web, carry a potential threat to global peace and security. (Ghana)
  • A call is made for enhanced global cooperation to address the root causes of conflict, including new technologies. (Uganda)
  • The inclusive and constructive dialogue that shaped the first UNGA resolution on AI should serve as a model for discussions on AI, peace, and security, and on the responsible use of AI in the military domain. (Micronesia)
  • International humanitarian law must be upheld, and weapons which “kill randomly” must be banned. (Austria)

Human rights in the digital space

  • The values of freedom, democracy, and human rights are threatened by the abuse of digital technologies. (European Union)
  • Emerging technologies, particularly AI, pose significant risks to human rights, requiring a move toward multilateral and ethical governance. (Ecuador)

Disinformation and misinformation

  • The rise of disinformation is among the challenges our world is confronted with. (Haiti, Montenegro). Technology makes it easier to disseminate disinformation and sow seeds of division. (Ghana)
  • The values of freedom, democracy, and human rights are threatened by disinformation. (European Union)
  • Authoritarian states are manipulating large language models so that chatbots answer in the voice of their propaganda. (United Kingdom)

Digital technologies for development

Digital inclusion and access

  • Access to technologies, especially in the digital and artificial intelligence era, is a decisive factor for promoting sustainable development. (Cameroon)
  • Technologies of the future should be embraced as an opportunity for growth, innovation, and a sustainable future. (Norway)
  • Digital access programmes seek to narrow the technological divide for millions. (United Kingdom)
  • A move towards multilateral and ethical governance of AI is necessary to avoid new digital gaps. (Ecuador)
  • The national growth and development plan focuses on developing the digital sector and enhancing youth employment. (Gabon)
  • Digital transformation is a catalyst for sustainable development. Partnerships with developed countries in advancing technology are welcomed. (Eswatini) Global partners are invited to invest in technology. (Botswana)
  • There is a desire for a modern state able to invest in people, development, technology, and education. (State of Palestine)
  • Existing inequalities mean that only some are at the frontier of digital technologies. (Dominica) There are widening technological inequalities and unequal access to technology. (Rwanda)

Digital public infrastructure and services

  • There is potential for cooperation in digital connectivity, with Azerbaijan leading initiatives like the Digital Silk Way, which includes plans for an advanced fibre optic cable network under the Caspian Sea. (Azerbaijan)
  • Uganda is deploying digital health solutions to improve service delivery and accountability. (Uganda)

Technology transfers

  • A lack of technological transfer is a major challenge. (Equatorial Guinea)
  • Technology transfers are advocated for. (Ecuador)
  • Support is required in technology transfer to address the intertwined challenges of development and environmental stability. (South Sudan)
  • No state should be locked out of opportunities for growth, finance, and technology due to geographical circumstances. (Ethiopia)
  • A call for increased long-term concessional financing, technology transfer, and fairer trade terms is made to support domestic development efforts. (Tanzania)
  • A call is made for fairer global governance, including equal access to financing for green technologies. (Chad)

For other topics discussed, head over to our dedicated UNGA80 page, where you can explore more insights from the General Debate.

The General Debate at the 80th session of the UN General Assembly brings together high-level representatives from across the globe to discuss the most pressing issues of our time. The session took place against the backdrop of the UN’s 80th anniversary, serving as a moment for both reflection and a forward-looking assessment of the organisation’s role and relevance.

Digital on Day 2 of UNGA80: Calls for digital inclusion, responsible AI, and collective security


Welcome to the second daily report from the General Debate at the 80th session of the UN General Assembly (UNGA80). Our daily hybrid AI–human reports bring you a concise overview of how world leaders are framing the digital future.

Day 2 debates underscored a need to align rapid technological change with global governance, with countries calling for stronger international cooperation and responsible approaches to the development and use of technology. Delegations emphasised that digital technologies must serve humanity – advancing development, human rights, and democracy – while also warning of the growing security risks posed by AI misuse, disinformation, hybrid warfare, and cyber threats. Alongside some calls for rules and ethical standards, many highlighted the importance of inclusion, investment in digital infrastructure, and ensuring that all states can share in the benefits of the digital age.

To keep the highlights clear and accessible, we leave them in bullet points — capturing the key themes and voices as they emerge.


Global digital governance and cooperation

  • Technological disruptions are currently outpacing governance. (Kenya)
  • The digital age must be guided by international cooperation, ethical standards, and respect for human rights, with technology placed at the service of humanity. (Albania)
  • The world needs a strong and effective UN system capable of responding to the rapid evolution of new technologies. (Czechia) A renewed UN can strengthen digital security and international cooperation with ethical and inclusive principles that support freedom of expression. (Panama)
  • Albania is co-leading with Kenya the review process of the World Summit on the Information Society (WSIS) and will work to ensure a successful outcome. (Albania)
  • International Geneva can make a unique contribution to the attainment of global goals, leveraging its expertise in humanity and innovation as a centre for reflection, discussion, and concerted action. (Switzerland)

Artificial intelligence

Responsible AI (governance)

  • AI must serve human dignity, development, and human rights, and not the other way around. (Estonia)
  • AI governance is seen as one of three significant global challenges facing the international community, along with nuclear weapons and the triple planetary crisis. (Costa Rica)
  • Governments should act swiftly to create regulations that make AI safer and more beneficial for people. Focus should be placed on developing AI  responsibly, not halting progress. (Latvia)
  • A responsible approach from all international institutions, the private sector, and governments is needed to steer the AI revolution. (Slovakia)
  • Regulations, ethical standards, and governance mechanisms are urgently needed in the AI space, to address issues of equity and access. (Guyana)
  • A global standard is called for to ensure the use of AI is transparent, fair, and respects ethical boundaries, without substituting for human judgment or responsibility. (Namibia)
  • The UN General Assembly’s decision to establish two global AI governance mechanisms – the independent international scientific panel and a global dialogue on AI governance –  is welcomed. (Guyana, Costa Rica)

AI for development and growth

  • AI can accelerate progress on the 2030 Agenda if directed towards a fair and equitable digital transformation. (Spain) It can strengthen national economies and collective efforts for development, optimising resources, accelerating medical research, and democratising access to knowledge. (Costa Rica) AI can also promote economic growth, drive scientific progress and innovation, improve healthcare, and make education more accessible. (Latvia) 
  • AI and digitisation can accelerate the demand for energy. (Guyana)
  • Investment is needed in new technologies and artificial intelligence to help developing countries transition to a more prosperous future. (Congo)
  • AI must stand for ‘Africa included‘. (Nigeria)
  • An AI hub for sustainable development is being opened, involving hundreds of African startups in the development of artificial intelligence. (Italy)
  • A neutral sovereign artificial intelligence zone has been proposed. (Sri Lanka)
  • Guyana is establishing an AI hyperscale data centre which will help accelerate digitalisation and improve competitiveness. (Guyana)
  • Equipping citizens with the skills to use AI wisely and responsibly is essential. Estonia is implementing a new ‘Artificial Intelligence Leap’ to provide the best technological tools to students and teachers to maintain a comparative edge in education. (Estonia)

Digital tech, peace and security 

  • Concerns were raised about the impact of drones – with or without AI – on peace and security. The proliferation of drones available to countries with limited resources or non-state actors presents a rapidly evolving security threat, having increased the lethality and changed the economics of war. (Croatia, Latvia, Ukraine)
  • Acts of hybrid warfare include disinformation campaigns, attempts to undermine public trust, cyberattacks, and acts of sabotage carried out by mercenaries recruited online. (Czechia) Damage to undersea cables and GPS jamming are also part of a growing wave of hybrid attacks. (Latvia)
  • Emerging threats such as cyberattacks, hybrid attacks, and the misuse of AI (for instance to spread disinformation or enable attacks on critical infrastructure) challenge international peace, security, and stability. Countering these requires resilience and increased cooperation. (Latvia, Costa Rica)
  • Technologies like AI, cyber capabilities, space technology and robotics can strengthen defences, but can also be misused by hostile actors. Security needs to be rethought, nationally and globally. Rules, safeguards, and cooperation must keep pace with innovation in technologies, to ensure that they can contribute to resilience and stability. The UN must evolve to be able to effectively address such complex challenges. (Croatia, Cyprus)
  • There is an urgent need for global rules on how AI can be used in weapons, comparable in urgency to preventing nuclear weapons proliferation. (Ukraine)
  • Military automation, enabled by AI, challenges the ability to maintain meaningful human control over life-or-death decisions without adequate regulatory frameworks. The conclusion of a legally binding instrument before 2026 is urged to establish prohibitions and regulations for autonomous weapons systems capable of identifying, selecting, and attacking targets without meaningful human control, stressing that no algorithm should make life or death decisions. (Costa Rica)
  • The arms race is resuming, including in cyberspace. (Senegal) Cybercrime and cyber terrorism are emerging challenges. (Guyana)

Human rights in the digital space

  • Safeguarding digital rights and advancing media freedom are critical for advancing democracy and protecting the international law-based multilateral world order. (Estonia)
  • It is proposed to establish a global charter for digital governance and ethical AI to protect human rights in the digital sphere. (Central African Republic)

Disinformation and misinformation

  • Concern was expressed about an emerging generation that grows cynical because it believes nothing and trusts less, due to the rapid advancement of technology. (Nigeria)
  • The ‘pandemic’ of misinformation and disinformation is an emerging challenge. (Guyana)
  • The proliferation of misinformation, particularly via digital platforms, has fuelled distrust between countries, targeting elections, trade negotiations, and public sentiment. (Serbia)
  • Disinformation, which gains even greater volume in digital environments, is eroding public trust and is part of the challenges testing the principles of the UN Charter and the UN’s authority. (Dominican Republic; Sierra Leone)
  • Autocracies are deploying new technology to undermine trust in democracy, institutions, and each other. (Australia)

Digital technologies for development

Digital inclusion and access

  • Ensuring that every person and country benefits from the opportunities of the digital age is a global challenge. The international community must work together to close the digital gap between states that can and cannot benefit from digital tech and AI as development tools. (Sri Lanka)
  • There is a need for a new dialogue to promote a level of access to technology that allows emerging economies to more quickly close the wealth and knowledge gap. (Nigeria)
  • The digital divide must be closed. (Costa Rica, Nigeria) Advancing digital inclusion and the digital transition is essential for states to meet development goals. (Comoros, Kiribati) 
  • A dedicated initiative is advocated for, bringing together researchers, the private sector, government, and communities to close the digital divide. (Nigeria)
  • Investments are made in digital transformation and the digital economy to foster inclusion and innovation, and ensure no one is left behind. (Albania, Sierra Leone)

Digital public infrastructure and services

  • Digital solutions are vital for overcoming challenges from geographical isolation and limited economies of scale, and are key to enhancing public services, education, commerce, and climate resilience. (Kiribati)
  • The GovStack initiative, co-founded by Estonia in collaboration with the International Telecommunication Union and Germany, provides governments with a digital public infrastructure toolbox aimed at modernising digital services by creating a modular, open-source, and scalable framework. (Estonia)
  • Digitalisation is a part of the commitment to sustainable development and the 2030 Agenda goals. (Serbia)
  • Digital democracy is a national aim. (Sri Lanka)

Technology transfers, trade, and critical minerals

  • Many countries need technology transfers and capacity building (Guatemala), and developed countries must honour their commitments in these areas. (Sierra Leone) Solidarity, translated into technology transfers and other measures, is needed. (Dominican Republic)
  • The world urgently needs supply chains that are more reliable, diversified, and resilient. (Paraguay)
  • Allowing critical infrastructure to depend on authoritarian regimes is gambling with both the economy and democracy. (Paraguay)
  • Africa has an abundance of critical minerals that will drive the technologies of the future. Investments in the exploration, development, and processing of these minerals in Africa will diversify supply to the international market and help shape the architecture for peace and prosperity. Countries that host minerals must benefit from them through investment, partnership, local processing, and jobs. (Nigeria)

For other topics discussed, head over to our dedicated UNGA80 page, where you can explore more insight from the General Debate.

The General Debate at the 80th session of the UN General Assembly brings together high-level representatives from across the globe to discuss the most pressing issues of our time. The session took place against the backdrop of the UN’s 80th anniversary, serving as a moment for both reflection and a forward-looking assessment of the organisation’s role and relevance.

Digital on Day 1 of UNGA80


Welcome to the first daily report from the General Debate at the 80th session of the UN General Assembly (UNGA80). Our daily hybrid AI–human reports bring you a concise overview of how world leaders are framing the digital future.

Day 1 debates circled around a central message: technology must remain a servant of humanity, not its master. From calls to ensure AI benefits all societies and to build universal guardrails for its responsible use, to concerns over cybercrime, disinformation, and the governance of critical minerals, delegations stressed the urgent need for cooperation, inclusivity, and safeguards.

While opportunities for innovation, development, and peace were highlighted, speakers warned that without global frameworks, the same technologies could deepen divides, fuel insecurity, and erode human dignity.

To keep the highlights clear and accessible, we leave them in bullet points — capturing the key themes and voices as they emerge.


Tech for humanity and common good & global cooperation

  • Technology must be put at the service of humanity. It must be our servant, not our master. (UN Secretary-General)
  • The use of technology and global connectivity is too often twisted by cynical leaders and warmongering regimes, but can be harnessed for the common good. (Slovenia)
  • A vision of AI for all is needed to ensure that tech advancements contribute to the universal values of humanity. (Republic of Korea)
  • Africa must play an active role in defining international rules and standards and ensuring that technology is at the service of humanity. (Mozambique)
  • The international community must ensure that technology lifts up humanity and no country is locked out of the digital future. (UN Secretary-General)
  • Technological change is picking up pace, opening horizons of opportunity but also paving the way for dangerous forces where it is not regulated. New risks are posed by AI, cyber, space, and quantum technologies, and while common frameworks exist, they have been weakened or outpaced. Existing rules and institutions need to be consolidated, and frameworks for peace need to be built. (France)

Artificial intelligence

AI inclusion and capacity building

  • AI capacity gaps must be closed. All countries and societies must be able to use, design, and develop AI, and benefit from the opportunities the technology offers. (Türkiye, Kazakhstan, Uzbekistan, UN Secretary-General)
  • AI technologies should be used for the benefit of humanity, not as a new tool of domination. The UN Technology Bank for the Least Developed Countries could play a critical role in closing the digital and technological gap. (Türkiye)
  • A new international cooperation mechanism is proposed to facilitate the exchange of practical solutions and models of AI in healthcare, education, and culture. (Uzbekistan)
  • Not taking advantage of AI means wasting economic opportunities. Countries need to adapt to the challenges imposed by the need to use AI responsibly. (Morocco)

Responsible AI (governance)

  • The advancement of AI is outpacing regulation and responsibility, with its control concentrated in a few hands. (UN Secretary-General) There is a need for universal guardrails, common standards, and ethical norms to ensure transparency, safety, accountability, fairness, and the protection of individual rights in its deployment. The UN’s recent steps to establish an international scientific panel and an annual global dialogue on AI governance are supported. (UN Secretary-General, Kazakhstan)
  • Commitment was expressed to building multilateral governance to mitigate the risks of AI, in line with the Global Digital Compact. (Brazil)
  • AI could lead to a dystopia of deepening polarisation, inequality, and human rights abuses if not proactively managed. It can also be a driving force for innovation, prosperity, and direct democracy. (Republic of Korea)
  • Artificial intelligence poses new challenges to human dignity, justice, and labor, with risks of exclusion, social manipulation, and militarization through autonomous weapons. Addressing them requires understanding how AI works and having robust safeguards in place. (Mozambique)

Cybersecurity and cybercrime

  • Digital technologies come with new security threats, in particular cybercrime. Cybersecurity must be an important component of collective security. (Tajikistan)
  • Viet Nam looks forward to the signing ceremony of the UN Convention Against Cybercrime. (Viet Nam)

Digital technology, peace and security

  • There are risks associated with new technologies, from biotech to autonomous weapons. There is also a rise of tools for mass surveillance and control, which can intensify the race for critical minerals and potentially spark instability. (UN Secretary-General)
  • The US will pioneer an AI verification system to enforce the biological weapons convention. (United States)
  • Digital, space and AI technologies should be used as forces for peace, not tools for domination. (Portugal)
  • The use of ICTs to harm peace, security and sustainable development needs to be prevented. (Turkmenistan)

Human rights in the digital space

  • Technology must serve humanity and be a force for good. It must promote human rights, human dignity, and human agency. (UN Secretary-General) 
  • Regulating digital platforms does not mean restraining freedom of expression, but ensuring that what is illegal offline is also illegal online. (Brazil)

Disinformation and misinformation

  • Digital platforms offer possibilities for people to come together, but they have also been used for sowing intolerance, misogyny, xenophobia, and misinformation, necessitating government regulation to protect the vulnerable. (Brazil)
  • The rise of tools for mass disruption and mass social control is a concern. (UN Secretary-General)
  • There’s a growing challenge of disinformation being used to undermine democratic institutions and destabilize societies. The international community needs to defend truth as a supreme value. (Lithuania)

Digital inclusion and tech for development

  • Bridging the digital and technological divides is central to building resilient societies. (Portugal)
  • It is important to prevent inequalities in digital development and the use of artificial intelligence between countries. (Uzbekistan)
  • Digital transformation must be balanced, reflect the realities and legitimate interests of all states, and be free from politicization and bias. A proposal will be made to establish a world platform on digital integration. (Turkmenistan)
  • There is a need for a technological and a climate diplomacy that can regulate risks and democratise benefits through genuine transfer and sharing of technology and knowledge, so that technology is a factor of inclusive development. (Mozambique)
  • Sustainable development models need to be based on digital and green transition. For this, countries must invest in R&D, train human resources, develop green infrastructure, and formulate national plans, while developed countries must take responsibility in sharing and transferring technology to developing and underdeveloped countries. (Viet Nam)
  • Nations which benefited the most from industrial and economic development in the past should support developing countries through measures such as technology transfers and adequate financing. (Angola)

Critical minerals

  • Robust regulations need to balance responsible mineral extraction with effective environmental protection. (Nauru)
  • Rich countries are demanding greater access to resources and technology. The race for critical minerals cannot repeat the predatory and asymmetrical logic of past centuries. (Brazil)
  • Critical minerals need to be harnessed for inclusive growth and sustainable development, including within the communities where these minerals are extracted from. (South Africa)
  • The governance of strategic minerals needs to ensure that exploitation complies with the principles of sustainable development, economic sovereignty and people’s well-being. (Democratic Republic of the Congo)

For other topics discussed, head over to our dedicated UNGA80 page, where you can explore more insight from the General Debate.


The General Debate at the 80th session of the UN General Assembly brings together high-level representatives from across the globe to discuss the most pressing issues of our time. The session took place against the backdrop of the UN’s 80th anniversary, serving as a moment for both reflection and a forward-looking assessment of the organisation’s role and relevance.

Weekly #230 Nepal’s Discord democracy: How a banned platform became a ballot box


12 – 19 September 2025


HIGHLIGHT OF THE WEEK

In a historic first for democracy, a country has chosen its interim prime minister via a messaging app. 

In early September, Nepal was thrown into turmoil after the government abruptly banned 26 social media platforms, including Facebook, YouTube, X, and Discord, citing failure to comply with registration rules. The move sparked outrage, particularly among the country’s Gen Z, who poured into the streets, accusing officials of corruption. The protests quickly turned deadly.

Within days, the ban was lifted. Nepalis turned to Discord to debate the country’s political future, fact-check rumours and collect nominations for the country’s future leaders. On 12 September, the Discord community organised a digital poll for an interim prime minister, with former Supreme Court Chief Justice Sushila Karki emerging as the winner.

Karki was sworn in the same evening. On her recommendation, the President has dissolved parliament, and new elections have been scheduled for 5 March 2026, after which Karki will step down.


However temporary or symbolic, the episode underscored how digital platforms can become political arenas when traditional ones falter. When official institutions lose legitimacy, people will instinctively repurpose the tools at their disposal to build new ones. 


IN OTHER NEWS THIS WEEK

TikTok ban deadline extended to December 2025 as sale negotiations continue

The TikTok saga entered what many see as yet another act in a long-running drama. In early 2024, the US Congress, citing national security risks, passed a law demanding that ByteDance, TikTok’s Chinese parent company, divest control of the app or face a ban in the USA. The law, which had bipartisan support in Congress, was later upheld by the Supreme Court.

A refresher. The US government has long argued that the app’s access to US user data poses significant risks. Why? TikTok is a subsidiary of ByteDance, a private Chinese company possibly subject to China’s 2017 National Intelligence Law, which requires any Chinese entity to support, assist, and cooperate with state intelligence work – potentially including the transfer of US citizens’ TikTok data to China. On the other hand, TikTok and ByteDance maintain that TikTok operates independently and respects user privacy.

However, the administration under President Trump has been repeatedly postponing enforcement via executive orders.

Economic and trade negotiations with China have been central to the delay. As the fourth round of talks in Madrid coincided with the latest deadline, Trump opted to extend the deadline again — this time until 16 December 2025 — giving TikTok more breathing room. 

The talks in Madrid have revolved around a potential ‘framework deal’ under which TikTok would be sold or restructured in a way that appeases US concerns, but would retain certain ‘Chinese characteristics’.

What do officials say is in the deal? 

  • TikTok’s algorithm: According to Wang Jingtao, deputy director of China’s Central Cyberspace Affairs Commission, there was consensus on authorisation of ‘the use of intellectual property rights such as (TikTok’s) algorithm’ — a main sticking point in the deal.
  • US user data: According to Wang Jingtao, the sides also agreed on entrusting a partner with handling US user data and content security.

What else is reported to be in the deal?

  • A new recommendation algorithm licensed from TikTok parent ByteDance
  • Creating a new company to run TikTok’s US operations and/or creating a new app for US users to move to
  • A consortium of US investors, including Oracle, Silver Lake, and Andreessen Horowitz, would own 80% of the business, with 20% held by Chinese shareholders.
  • The new company’s board would be mostly American, including one member appointed by the US government.

Trump himself stated that he will speak with Chinese President Xi Jinping on Friday to possibly finalise the deal.

If finalised, this deal could establish a new template for how nations manage foreign technology platforms deemed critical to national security.


China’s counterpunch in the chip war

While TikTok grabs headlines as the most visible symbol of the USA–China digital rivalry, the more consequential battle may be unfolding in the semiconductor sector. Just as Washington extends the deadline for TikTok’s divestiture, Beijing has opened a new line of attack: an anti-dumping probe into US analogue chips.  

Announced by China’s Ministry of Commerce, the probe accuses US firms of ‘lowering and suppressing’ prices in ways that hurt domestic producers. It covers legacy chips built on older 40nm-plus process nodes — not the cutting-edge AI accelerators that dominate geopolitical debates, but the everyday workhorse components that power smart appliances, industrial equipment, and automobiles. These mature nodes account for a massive share of China’s consumption, with US firms supplying more than 40% of the market in recent years.

For China’s domestic industry, the probe is an opportunity. Analysts say it could force foreign suppliers to cede market share to local firms concentrated in Jiangsu and other industrial provinces. At the same time, there are reports that China is asking tech companies to stop purchasing Nvidia’s most powerful processors. And speaking of Nvidia, the company is in the crosshairs again, as China’s State Administration for Market Regulation (SAMR) issued a preliminary finding that Nvidia violated antitrust law linked to its 2020 acquisition of Mellanox Technologies. Depending on the outcome of the investigation, Nvidia could face penalties.

Meanwhile, Washington is tightening its own grip. The USA will require annual license renewals for South Korean firms Samsung and SK Hynix to supply advanced chips to Chinese factories — a reminder that even America’s allies are caught in the middle. 

Last month, the US government acquired a 10% stake in Intel. This week, Nvidia announced a $5 billion investment in Intel to co-develop custom chips with the company. Together, these moves reflect Washington’s broader push to reinforce semiconductor leadership amid competition from China.


UK and USA sign Tech Prosperity Deal

The USA and the UK have signed a Technology Prosperity Deal to strengthen collaboration in frontier technologies, with a strong emphasis on AI, quantum, and the secure foundations needed for future innovation.

On AI, the deal expands joint research programs, compute access, and datasets in areas like biotechnology, precision medicine, fusion, and space. It also aligns policies, strengthens standards, and deepens ties between the UK AI Security Institute and the US Center for AI Standards and Innovation to promote secure adoption.

On quantum, the countries will establish a benchmarking task force, launch a Quantum Code Challenge to mobilise researchers, and harness AI and high-performance computing to accelerate algorithm development and system readiness. A US-UK Quantum Industry Exchange Program will spur adoption across defence, health, finance, and energy.

The agreement also reinforces foundations for innovation, including research security, 6G development, resilient telecoms and navigation systems, and mobilising private capital for critical technologies.

The deal was signed during a state visit by President Trump to the UK. Also present: OpenAI’s Sam Altman, Nvidia’s Jensen Huang, Microsoft’s Satya Nadella, and Apple’s Tim Cook. 

Microsoft pledged $30bn over four years in the UK, its largest-ever UK commitment. Half will go into capital expenditure for AI and cloud datacentres, the rest into operations like research and sales. 

Nscale, OpenAI and Nvidia will develop a platform that will deploy OpenAI’s technology in the UK. Nvidia will channel £11bn in value into UK AI projects by supplying up to 120,000 Blackwell GPUs, data centre builds, and supercomputers. It is also directly investing £500m in Nscale. 

‘This is the week that I declare the UK will be an AI superpower,’ Jensen Huang told BBC News.

Missing from the deal? The UK’s Digital Services Tax (DST), which remains set at 2% and was previously reported to be part of the negotiations, along with copyright issues linked to AI training.


The digital playground gets a fence and a curfew

In response to rising concerns over the impact of AI and social media on teenagers, governments and tech companies are implementing new measures to enhance online safety for young users.

Australia has released its regulatory guidance for the incoming nationwide ban on social media access for children under 16, effective 10 December 2025. The legislation requires platforms to verify users’ ages and ensure that minors are not accessing their services. Platforms must detect and remove underage accounts, communicating clearly with affected users. Platforms are also expected to block attempts to re-register. It remains uncertain whether removed accounts will have their content deleted or if they can be reactivated once the user turns 16.

French lawmakers are proposing stricter regulations on teen social media use, including mandatory nighttime curfews. A parliamentary report suggests that social media accounts for 15- to 18-year-olds should be automatically disabled between 10 p.m. and 8 a.m. to help combat mental health issues. This proposal follows concerns about the psychological impact of platforms like TikTok on minors. 

In the USA, the Federal Trade Commission (FTC) has launched an investigation into the safety of AI chatbots, focusing on their impact on children and teenagers. Seven firms, including Alphabet, Meta, OpenAI and Snap, have been asked to provide information about how they address risks linked to AI chatbots designed to mimic human relationships. Not long after, grieving parents testified before the US Congress, urging lawmakers to regulate AI chatbots after their children died by suicide or self-harmed following interactions with these tools.

OpenAI has introduced a specialised version of ChatGPT tailored for teenagers, incorporating age-prediction technology to restrict access to the standard version for users under 18. Where uncertainty exists, it will assume the user is a teenager. If signs of suicidal thoughts appear, the company says it will first try to alert parents. Where there is imminent risk and parents cannot be reached, OpenAI is prepared to notify the authorities. This initiative aims to address growing concerns about the mental health risks associated with AI chatbots, while also raising concerns related to issues such as privacy and freedom of expression. 
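
The escalation logic OpenAI describes can be read as a simple decision flow. The sketch below is an illustrative reconstruction only, based on the public description above: the class, field names, and the fallback branch are invented for the example and do not reflect OpenAI’s actual implementation.

```python
# Illustrative decision flow for the teen-safety policy described above.
# This is NOT OpenAI's code: the class, fields, and fallback branch are
# hypothetical, reconstructed only from the company's public description.

from dataclasses import dataclass

@dataclass
class UserSignals:
    predicted_minor: bool     # output of an age-prediction model (assumed interface)
    age_uncertain: bool       # the model could not decide confidently
    self_harm_signals: bool   # conversation shows signs of suicidal thoughts
    imminent_risk: bool       # signals suggest immediate danger
    parents_reachable: bool   # a parent or guardian can be contacted

def route_user(s: UserSignals) -> str:
    """Return the action the described policy would take for a given user."""
    # Uncertain age defaults to the restricted teen experience.
    is_teen = s.predicted_minor or s.age_uncertain
    if not is_teen:
        return "standard ChatGPT experience"
    if s.self_harm_signals:
        if s.parents_reachable:
            return "teen experience + alert parents first"
        if s.imminent_risk:
            return "teen experience + notify authorities"
        return "teen experience + keep trying to reach parents"  # assumed fallback
    return "restricted teen experience"

# Example: age is uncertain, self-harm signals present, imminent risk, no parent reachable.
print(route_user(UserSignals(False, True, True, True, False)))
# -> teen experience + notify authorities
```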

The intentions are largely good, but a patchwork of bans, curfews, and algorithmic surveillance just underscores that the path forward is unclear. Meanwhile, the kids are almost certainly already finding the loopholes.


THIS WEEK IN GENEVA

The digital governance scene has been busy in Geneva this week. Here’s what we have tried to follow.

The Human Rights Council

The Human Rights Council discussed a report on the human rights implications of new and emerging technologies in the military domain on 18 September. Prepared by the Human Rights Council Advisory Committee, the report recommends, among other measures, that ‘states and international organizations should consider adopting binding or other effective measures to ensure that new and emerging technologies in the military domain whose design, development or use pose significant risks of misuse, abuse or irreversible harm – particularly where such risks may result in human rights violations – are not developed, deployed or used’.

WTO Public Forum 2025

WTO’s largest outreach event, the WTO Public Forum, took place from 17 to 18 September under the theme ‘Enhance, Create and Preserve’. Digital issues were high on the agenda this year, with sessions dedicated to AI and trade, digital resilience, the moratorium on customs duties on electronic transmissions, and e-commerce negotiations, among other topics. Other issues were also salient, such as the uncertainty created by rising tariffs and the need for WTO reform. During the Forum, the WTO launched the 2025 World Trade Report, under the title ‘Making trade and AI work together to the benefit of all’. The report explores AI’s potential to boost global trade, particularly through digitally deliverable services. It argues that AI can lower trade costs, improve supply-chain efficiency, and create opportunities for small firms and developing countries, but warns that without deliberate action, AI could deepen global inequalities and widen the gap between advanced and developing economies.

CSTD WG on data governance

The third meeting of the UN CSTD working group on data governance (WGDG) took place on 15-16 September. The focus of this meeting was on the work being carried out in the four working tracks of the WGDG: (1) principles of data governance at all levels; (2) interoperability between national, regional and international data systems; (3) considerations of sharing the benefits of data; (4) facilitation of safe, secure and trusted data flows, including cross-border data flows.

WGDG members reviewed the synthesis reports produced by the CSTD Secretariat, based on the responses to questionnaires proposed by the co-facilitators of working tracks. The WGDG decided to postpone the deadline for contributions to 7 October. More information can be found in the ‘call for contributions’ on the website of the WGDG.


LOOKING AHEAD

The next two weeks at the UN will be packed with high-level discussions on advancing digital cooperation and AI governance. 

The general debate, from 23 to 29 September, will gather heads of state, ministers, and global leaders to tackle pressing challenges—climate change, sustainable development, and international peace—under the theme ‘Better together: 80 years and more for peace, development and human rights.’ Diplo and the Geneva Internet Platform will track digital and AI-related discussions using a hybrid of expert analysis and AI tools, so be sure to bookmark our dedicated web page.

On 22 September, the UN Office for Digital and Emerging Technologies (ODET) will host Digital Cooperation Day, marking the first anniversary of the Global Digital Compact. Leaders from government, the private sector, civil society, and academia will explore inclusive digital economies, AI governance, and digital public infrastructure through panels, roundtables, and launches.

On 23 September, ITU and UNDP will host Digital@UNGA 2025: Digital for Good – For People and Prosperity at UN Headquarters. The anchor event will feature high-level discussions on digital inclusion, trust, rights, and equity, alongside showcases of initiatives such as the AI Hub for Sustainable Development. Complementing this gathering, affiliate sessions throughout the week will explore future internet governance, AI for the SDGs, digital identity, green infrastructure in Africa, online trust in the age of AI, climate early-warning systems, digital trade, and space-based connectivity. 

A major highlight will be the launch of the Global Dialogue on AI Governance on 25 September. Set to have its first meeting in 2026 along with the AI for Good Summit in Geneva, the dialogue’s main task – as decided by the UN General Assembly – is to facilitate open, transparent and inclusive discussions on AI governance.



READING CORNER

Ever wonder how AI really works? Discover its journey from biological neurons to deep learning and the breakthrough paper that transformed modern artificial intelligence.


Hallucinations in AI can look like facts. Learn how flawed incentives and vague prompts create dangerous illusions.

Weekly #229 Von der Leyen declares Europe’s ‘Independence Moment’


5 – 12 September 2025


Dear readers,

‘Europe is in a fight,’ European Commission President Ursula von der Leyen declared as she opened her 2025 State of the Union speech. Addressing the European Parliament in Strasbourg, von der Leyen noted that ‘Europe must fight. For its place in a world in which many major powers are either ambivalent or openly hostile to Europe.’ In response, she argued for Europe’s ‘Independence Moment’ – a call for strategic autonomy.

One of the central pillars of her plan? A major push to invest in digital and clean technologies. Let’s explore the details we’ve heard in the speech.


The EU plans measures to support businesses and innovation, including a digital euro and an upcoming digital omnibus. Many European startups in key technologies like quantum, AI, and biotech seek foreign investment, which jeopardises the EU’s tech sovereignty, the speech notes. In response, the Commission will launch a multi-billion-euro Scaleup Europe Fund with private partners.

The Single Market remains incomplete, von der Leyen noted, mostly in three domains: finance, energy, and telecommunications. A Single Market Roadmap to 2028 will be presented, which will provide clear political deadlines.

Standing out in the speech was von der Leyen’s defence of Europe’s right to set its own standards and regulations. The assertion came right after her defence of the US-EU trade deal, making it a direct response to the mounting pressure and tariff threats from the US administration.

The EU needs ‘a European AI’, von der Leyen noted. Key initiatives include the Cloud and AI Development Act, the Quantum Sandbox, and the creation of European AI Gigafactories to help startups develop, train, and deploy next-generation AI models. 

Additionally, CEOs of Europe’s leading tech companies will present their European AI & Tech Declaration, pledging to invest in and strengthen Europe’s tech sovereignty, von der Leyen stated.

Europe should consider implementing guidelines or limits for children’s social media use, von der Leyen noted. She pointed to Australia’s pioneering social media restrictions as a model under observation, indicating that Europe could adopt a similar approach. To ensure a well-informed and balanced policy, she announced plans to commission a panel of experts by the end of the year to advise on the best strategies for Europe.

Von der Leyen’s bet is that a potent mix of massive investment, streamlined regulation, and a unified public-private front can finally stop Europe from playing catch-up in the global economic race.

History is on her side in one key regard: when the EU and corporate champions unite, they win big on setting global standards, and GSM is just one example. But past glory is no guarantee of future success. The rhetoric is sharp, and the stakes are existential. Now, the pressure is on to deliver more than just a powerful speech.


IN OTHER NEWS THIS WEEK

The world’s eyes turned to Nepal this week, where authorities’ ban of 26 social media platforms sparked nationwide protests, led largely by youth, against corruption. According to officials, the ban was introduced in an effort to curb misinformation, online fraud, and hate speech. The ban was lifted after the protests intensified and left 22 people dead. The events are likely to offer lessons for other governments grappling with the role of censorship during times of unrest.

Another country fighting corruption is Albania, using unusual means – the government made a pioneering move by introducing the world’s first AI-powered public official, named Diella. Appointed to oversee public procurement, the virtual minister represents an attempt to use technology itself to create a more transparent and efficient government, with the goal of ensuring procedures are ‘100% incorruptible.’ A laudable goal, but AI is only as unbiased as the data and algorithms it relies on. Still, it’s a daring first step.

Speaking of AI (and it seems we speak of little else these days), another nation is trying its best to adapt to the global transformation driven by rapid digitalisation and AI. Kazakhstan has announced an ambitious goal: to become a fully digital country within three years.

The central policy is the establishment of a new Ministry of Artificial Intelligence and Digital Development, which will ensure the total implementation of AI to modernise all sectors of the economy. This effort will be guided by a national strategy called ‘Digital Kazakhstan’ to combine all digital initiatives.

A second major announcement was the development of Alatau City, envisioned as the country’s innovation hub. Planned as the region’s first fully digital city, it will integrate Smart City technologies, allow cryptocurrency payments, and is being developed with the expertise of a leading Chinese company that helped build Shenzhen.

Has Kazakhstan bitten off more than it can chew in 3 years’ time? Even developing a national strategy can take years; implementing AI across every sector of the economy is exponentially more complex. Kazakhstan has dared to dream big; now it must work hard to achieve it.

AI’s ‘magic’ comes with a price. Authors sued Apple last Friday for allegedly training its AI on their copyrighted books. In a related development, AI company Anthropic agreed to a massive $1.5 billion settlement for a similar case – what plaintiffs’ lawyers are calling the largest copyright recovery in history, even though the company admitted no fault. Will this settlement mark a dramatic shift in how AI companies operate? Without a formal court ruling, it creates no legal precedent. For now, the slow grind of the copyright fight continues.


THIS WEEK IN GENEVA

The digital governance scene has been busy in Geneva this week. Here’s what we have tried to follow. 

At the International Telecommunication Union (ITU), the Council Working Group (CWG) on WSIS and SDGs met on Tuesday and Wednesday to look at the work undertaken by ITU with regard to the implementation of WSIS outcomes and the 2030 Agenda, and to discuss issues related to the ongoing WSIS+20 review process.

As we write this newsletter, the Expert Group on ITRs is working on the final report it needs to submit to the ITU Council in response to the task it was given to review the International Telecommunication Regulations (ITRs), considering evolving global trends, tech developments, and current regulatory practices.

A draft version of the report notes that members have divergent views on whether the ITRs need revision and even on their overall relevance; there also doesn’t seem to be a consensus on whether and how the work on revising the ITRs should continue. On another topic, the CWG on international internet-related public policy issues is holding an open consultation on ensuring meaningful connectivity for landlocked developing countries. 

Earlier in the week, the UN Institute for Disarmament Research (UNIDIR) hosted the Outer Space Security Conference, bringing together diplomats, policy makers, private actors, experts from the military sectors and others to look at ways in which to shape a secure, inclusive and sustainable future for outer space.

Some of the issues discussed revolved around the implications of using emerging technologies such as AI and autonomous systems in the context of space technology and the cybersecurity challenges associated with such uses. 


IN CASE YOU MISSED IT
UN Cyber Dialogue 2025
www.diplomacy.edu

The session brought together discussants to offer diverse perspectives on how the OEWG experience can inform future global cyber negotiations.

African priorities for GDC
www.diplomacy.edu

In 2022, the idea of a Global Digital Compact was floated by the UN with the intention of developing shared…


LOOKING AHEAD

The next meeting of the UN’s ‘Multi-Stakeholder Working Group on Data Governance’ is scheduled for 15-16 September in Geneva and is open to observers (both onsite and online).

In a recent event, experts from Diplo, the Open Knowledge Foundation (OKFN), and the Geneva Internet Platform analysed the Group’s progress and looked ahead to the September meeting. Catch up on the discussion and watch the full recording.

The 2025 WTO Public Forum will be held on 17–18 September in Geneva, and carries the theme ‘Enhance, Create, and Preserve.’ The forum aims to explore how digital advancements are reshaping global trade norms.

The agenda includes sessions that dig into the opportunities posed by e-commerce (such as improving connectivity, opening pathways for small businesses, and increasing market inclusivity), but also shows awareness of the risks – fragmentation of the digital space, uneven infrastructure, and regulatory misalignment, especially amid geopolitical tensions. 

The Human Rights Council started its 60th session, which will continue until 8 October. A report on privacy in the digital age by OHCHR will be discussed next Thursday, 18 September. It looks at challenges and risks with regard to discrimination and the unequal enjoyment of the right to privacy associated with the collection and processing of data, and offers some recommendations on how to prevent digitalisation from perpetuating or deepening discrimination and exclusion.

Among these are a recommendation for states to protect individuals from human rights abuses linked to corporate data processing and to ensure that digital public infrastructures are designed and used in ways that uphold the rights to privacy, non-discrimination and equality.



READING CORNER

This summer saw power plays over US chips and China’s minerals, alongside the global AI race with its competing visions. Lessons of disillusionment and clarity reframed AI’s trajectory, while digital intrusions continued to reshape geopolitics. And in New York, the UN took a decisive step toward a permanent cybersecurity mechanism. 


eIDAS 2 and the European Digital Identity Wallet aim to secure online interactions, reduce bureaucracy, and empower citizens across the EU with a reliable and user-friendly digital identity.

Digital Watch newsletter – Issue 102 – July and August 2025

July-August 2025 in retrospect


The digital and geopolitical landscape is shifting faster than ever—and understanding it is more important than ever. This month, our newsletter takes you behind the headlines and into the forces shaping technology, AI, and cybersecurity.

The levers of power: US chips vs China’s critical minerals – who really holds the keys to the future?

The global AI race: Rival powers, competing visions, and what it means for the future of AI.

Lessons from summer: From disillusionment to clarity: Ten insights for AI today.

Cyber frontlines: Digital intrusions are not just technical—they’re reshaping geopolitics.

UN OEWG wrap-up: A landmark step toward a permanent cybersecurity mechanism.

This summer in Geneva: Key events and takeaways shaping international digital governance.

DIGITAL GOVERNANCE

The co-facilitators for the WSIS+20 process issued the Zero Draft of the outcome document for the twenty-year review of the implementation of the World Summit on the Information Society (WSIS+20). 

At its 1 September summit in Tianjin, the Shanghai Cooperation Organisation (SCO) highlighted tech, AI, and digital governance, with a declaration stressing cyber sovereignty, inclusive AI, cybersecurity norms, and stronger digital cooperation.

ARTIFICIAL INTELLIGENCE

The European Commission has released its finalised Code of Practice for general-purpose AI models, laying the groundwork for implementing the landmark AI Act. The new Code sets out transparency, copyright, and safety rules that developers must follow before deadlines. A new phase of the EU AI Act took effect on 2 August, requiring member states to appoint oversight authorities and enforce penalties.

Read more about this summer’s AI developments below.

TECHNOLOGIES

France and Germany announced a joint Economic Agenda, committing to joint efforts in AI, quantum, chips, cloud, and cybersecurity, while making digital sovereignty a central political and investment priority.

The USA, Japan, and South Korea held Trilateral Quantum Cooperation meetings to strengthen collaboration on securing emerging technologies.

The UK government unveiled its Digital and Technologies Sector Plan, aiming to grow the tech sector to £1 trillion, driven by AI, quantum computing, and cybersecurity.

Turkey’s government is preparing a long-awaited 5G frequency auction in October, with the Transport and Infrastructure Minister announcing that the first services should begin in 2026. 

Two Chinese nationals were charged in the US for illegally exporting millions of dollars’ worth of advanced Nvidia AI chips to China over the past three years. Read more about this summer’s chip developments below.

INFRASTRUCTURE

Over 70 civil society and consumer groups have issued a statement warning that proposed interconnection fees in the EU’s upcoming Digital Networks Act could undermine net neutrality, raise costs, and stifle innovation.

In the US, several public-interest groups have opted not to appeal a January 2025 court ruling that struck down the FCC’s net neutrality rules, instead pursuing alternative federal and state strategies to protect open internet access.

A €40 million Baltic Sea digital infrastructure project, backed by €15 million from the EU’s Connecting Europe Facility (CEF2), will establish four subsea cables and several hundred kilometers of terrestrial fiber, creating a ~550 km long-haul route linking Sweden, Estonia, and Finland to expand Baltic Sea connectivity.

A new lawsuit filed by Cloud Innovation has intensified AFRINIC’s ongoing governance crisis, raising fears over the potential loss of African control of the continent’s internet infrastructure.

CYBERSECURITY

A report from Australia’s eSafety Commissioner showed that tech giants have made minimal progress in combating child sexual abuse online, with some failing to track reports or staff numbers, despite legally enforceable transparency notices requiring regular reporting under Australia’s Online Safety Act.

A leaked memo reveals that the EU debate over mandatory private message scanning has intensified, with the European Parliament threatening to block the extension of voluntary rules unless the Council agrees to mandatory chat control.

Cybersecurity researchers have uncovered PromptLock, the first known AI-powered ransomware, a proof-of-concept capable of data theft and encryption that highlights how publicly available AI tools could escalate future cyberthreats.

INTERPOL has announced that a continent-wide law enforcement initiative targeting cybercrime and fraud networks led to more than 1,200 arrests between June and August 2025. 

The Open-ended Working Group (OEWG) on the security of and in the use of ICTs wrapped up its final substantive session in July 2025 with the adoption of its long-awaited Final Report.

ECONOMIC

US President Donald Trump has officially signed the GENIUS Act into law, marking a historic step in establishing a legal framework for stablecoins in the US. 

China is weighing plans to permit yuan-backed stablecoins in an effort to promote global use of its currency.

El Salvador’s National Bitcoin Office has split the country’s bitcoin reserves into multiple new addresses to bolster security, citing potential future risks such as quantum computing.

US President Donald Trump has threatened to impose retaliatory tariffs on countries implementing digital taxes or regulations affecting American technology companies. 

China has proposed draft rules to ensure fair and transparent pricing on internet platforms selling goods and services, inviting public feedback following widespread complaints from merchants and consumers.

HUMAN RIGHTS

The UK’s new Online Safety Act has increased VPN use, as websites introduce stricter age restrictions to comply with the law. 

Russian authorities have begun partially restricting calls on Telegram and WhatsApp, citing the need for crime prevention.

LEGAL

A Florida jury has ordered Tesla to pay $243 million in damages for a fatal 2019 Autopilot crash, ruling its driver-assistance software defective, which may significantly impact Tesla’s ambitions to expand its emerging robotaxi network in the USA.

The CJEU’s General Court has rejected a challenge to the EU–US Data Privacy Framework, allowing EU-to-US personal data transfers to continue without extra safeguards.

A United States federal judge has ruled against breaking up Google’s search business, instead ordering it to end exclusive deals, share data with rivals, and offer fair access to search and ad services after finding it illegally maintained its monopoly. However, the company has been hit with a 3.5 billion fine in the EU for abusing its dominance in digital advertising by giving unfair preference to its own ad exchange, AdX, in violation of EU antitrust rules.

SOCIOCULTURAL

Brazil’s Attorney General (AGU) has formally requested Meta to remove AI-powered chatbots that simulate childlike profiles and engage in sexually explicit dialogue, citing concerns that they ‘promote the eroticisation of children.’

In Nepal, mass protests erupted over a 24-hour social media ban of 26 platforms and government corruption, resulting in 19 deaths.

US President Trump called security and privacy concerns around TikTok highly overrated and said he’ll keep extending the deadline for its parent company, ByteDance, to sell its controlling stake in TikTok or face a nationwide ban.

DEVELOPMENT

The EU will require all platforms to verify users’ ages using the EU Digital Identity Wallet by 2026, with initial pilots in five countries and fines of up to €18 million or 10% of global turnover for non-compliance.

France, Germany, Italy, and the Netherlands signed the founding papers for a new European Digital Infrastructure Consortium for Digital Commons, which will focus on publicly developed and publicly usable digital programmes.

The UN Secretary-General’s July report elaborates on a voluntary Global Fund for AI, targeting $1–3 billion to support countries’ AI readiness through foundational resources, national strategies, and cooperation.

Achieving universal internet connectivity by 2030 could cost up to $2.8 trillion, an ITU–Saudi CST report warns, urging global cooperation and investment to bridge widening digital divides and connect the one-third of humanity still offline.


For years, semiconductors have been at the heart of the US–China technology rivalry, shaping trade negotiations, export controls, and national security debates. Nvidia’s latest struggle to sell its H20 chip in China is the latest chapter in a long-running standoff, highlighting how advanced technology, critical minerals, and industrial policy have become intertwined in global power politics.

The H20 chip, launched last year to help Nvidia maintain access to the Chinese market — which made up 13% of its sales in 2024 — was itself a product of geopolitics. However, in April, Washington told the company it needed a special license to export the H20 chip to China, halting shipments. The chip was believed to have powered DeepSeek, one of China’s most advanced AI models, raising US concerns about national security.

Nvidia reapplied for licenses in July and received assurances that they would be approved. Sales eventually resumed, but only after months of back-and-forth that reflected Washington’s shifting stance: In July, export controls were paused to bolster US-China trade negotiations. In August, the administration oscillated between threatening to block advanced Nvidia sales to China and signalling possible approval for modified versions.

In September, sales restarted, albeit under unusual circumstances: Going forward, Nvidia will give the US government 15% of its chip revenue from China, a deal that’s largely been described as unprecedented. AMD will do the same.

The bigger picture: China’s controls on rare earth exports became a major focus in the trade talks between Beijing and Washington this summer. Why does it matter? Because chip manufacturing relies heavily on critical minerals like germanium and gallium. The USA is heavily reliant on imports of both minerals, especially from China, given China’s dominant role as a producer and supplier of both. According to a US Mineral Commodity Summary, no domestic primary (low-purity, unrefined) gallium has been recovered since 1987, and there are no government stockpiles of the mineral. The USA does produce germanium, but only as a costly byproduct of zinc-ore processing, not as a primary product. A strategic stockpile of 5 tonnes of germanium does exist, but it is a paltry amount compared to China’s reported 199 tonnes of annual germanium production. (Sidenote: The numbers are, unfortunately, from 2023, but they paint a clear enough picture.)
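
A quick back-of-envelope check makes the scale gap concrete (Python used purely as a calculator; the 2023 figures are the ones cited above and are indicative only):

```python
# Back-of-envelope comparison of the germanium figures cited above (2023 data).
# Illustrative only: a stockpile is a one-off reserve, while production is an annual flow.

us_stockpile_tonnes = 5            # reported US strategic stockpile of germanium
china_annual_output_tonnes = 199   # China's reported annual germanium production

share = us_stockpile_tonnes / china_annual_output_tonnes
print(f"The US stockpile equals roughly {share:.1%} of one year of Chinese output.")
# -> The US stockpile equals roughly 2.5% of one year of Chinese output.
```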

China, meanwhile, relies on Nvidia’s chips to stay competitive in the global AI race. Domestic alternatives are still behind in performance, efficiency, and reliability, so using Nvidia hardware allows China to deploy cutting-edge AI solutions immediately while its homegrown industry continues to scale up.

The USA openly linked chip concessions to rare earths discussions: In exchange for increased shipments of rare earth minerals from China, the US agreed to lift export curbs on chip design software, ethane and jet engines.

The interplay between chip access and mineral supply illustrates a complex trade-off: each side leverages what it has — the USA its semiconductor know-how, China its dominance in rare earth minerals.

Both countries have already experimented with export controls, with mixed results. Reports surfaced that more than $1 billion worth of Nvidia chips had already reached China through alternative channels. This prompted the USA to consider embedding trackers into AI chip shipments to monitor possible diversions.

Despite China’s export restrictions, germanium and gallium continue to reach the USA via indirect trade routes, likely through re-exports from countries where China permits their export.

This data underscored doubts about whether export controls could truly contain the spread of advanced technology and prompted each of the players to make moves to position themselves better and reduce their reliance on each other.


The USA: Leveraging the CHIPS Act

The USA is reportedly weighing diverting $2 billion in 2022 CHIPS and Science Act funding toward critical minerals.

Washington has also considered taking equity stakes in US chipmakers in exchange for cash grants authorised by the 2022 CHIPS and Science Act, aimed at supporting domestic semiconductor manufacturing and research. So far, the administration has signalled it will convert $8.87 billion in CHIPS Act grant money that had been awarded to Intel into 10% equity in the company. While Intel confirmed it had received a grant, officials insisted negotiations were still ongoing, underscoring the lack of clarity. The White House has denied plans to pursue similar stakes in firms like TSMC or Micron, but officials hinted that other companies could still be subject to action.

Critics argue that government ownership risks undermining global competitiveness, and some analysts question whether recent interventions — including Trump’s claim to have ‘saved Intel’ — are more political theatre than industrial strategy.

Adding to the confusion, the US Commerce Department voided a $7.4 billion research grant signed under the Biden administration, further muddying the picture of America’s long-term semiconductor policy.

Tariffs are also a weapon the USA will be wielding: President Trump has said that the USA will impose a tariff of about 100% on imports of semiconductors, though companies that produce chips domestically—or have committed to do so—will be exempt. China’s Semiconductor Manufacturing International Corporation (SMIC) and Huawei are likely to be impacted. 

China: Managing Nvidia concerns while boosting local production

Beijing, meanwhile, has tried to play both offence and defence. 

Reports say that the China-linked hacker group APT41 sent emails posing as Rep. John Moolenaar, embedding malware to target US trade groups, law firms, and agencies in a bid to gain insight into recommendations to the White House for the contentious trade talks. The Chinese embassy in Washington refuted the claims.

Authorities demanded Nvidia explain alleged flaws in the H20 chips, while state media went further, warning that the chips were unsafe for domestic use. Nvidia denied the accusations, stressing that its products contained no backdoors.

The country is accelerating efforts to reduce reliance on foreign suppliers: it aims to triple domestic AI chip production, while tech giants such as Alibaba are unveiling homegrown alternatives.

Other Asian players are also navigating this fractured landscape. 

In July, Malaysia’s trade ministry announced that the export, transhipment, and transit of US-origin high-performance AI chips will now require a trade permit, effective immediately. 

South Korea secured exemptions for Samsung and SK Hynix from 100% tariffs on semiconductor exports to the USA, as both companies have invested in the USA since 2022. TSMC, which is based in Taiwan, has also invested significantly in the USA. If they come to pass, these tariffs would be devastating for the Philippines, as about 70% of its total exports come from the semiconductor industry. Specifically, 15% of Philippine semiconductor exports—about $6 billion—are destined for the USA.

However, Washington revoked fast-track export status for Samsung, SK Hynix, TSMC, and Intel, making it harder to ship American chipmaking equipment and technology to their manufacturing plants in China. From 31 December, shipments of American-origin chipmaking tools to Chinese facilities will require US export licenses. However, the US Commerce Department is now weighing annual approvals for exports of chipmaking supplies to Samsung’s and SK Hynix’s China-based plants.

Who has the edge?

The USA leads in chip design and advanced production, but its edge relies on access to critical minerals controlled by China. Beijing dominates the mineral supply but remains dependent on foreign high-end chips until domestic AI production scales up. In short, the US advantage is technologically superior but fragile, while China’s leverage is immediate but limited. The USA must find different sources of germanium and gallium, or identify substitutes (such as indium and silicon), while China must boost domestic chipmakers. How quickly each side addresses its weaknesses will shape the future of global tech dominance. And it won’t happen overnight.

The global race for AI dominance is intensifying, as countries continue to roll out ambitious strategies to shape the future of AI. In the USA, the White House has launched a sweeping initiative through its publication Winning the Race: America’s AI Action Plan, a comprehensive strategy aiming to cement US leadership in AI by promoting open-source innovation and streamlining regulatory frameworks. This ‘open-source gambit’ is a marked shift in US digital policy, seeking to democratise AI development to stay ahead of global competitors, particularly China.

This aggressive policy direction has found backing from major tech companies, which have endorsed President Trump’s AI deregulation plans despite growing public concern over societal risks. Notably, the plan emphasises ‘anti-woke’ AI frameworks in government contracts, sparking debates about the ideological neutrality and ethical implications of AI technologies in public administration.

Across Europe, nations are accelerating their AI initiatives. Germany is planning an AI offensive to catch up on critical technologies, while the UK is aiming for a £1 trillion tech sector driven by AI and quantum technology growth.

Asian nations are increasingly positioning AI at the centre of their economic and technological strategies. South Korea is prioritising AI-driven growth through major infrastructure and budget investments. The initiative includes the creation of an ‘AI expressway’, starting with the Ulsan AI data centre, underpinned by bold tax incentives and regulatory reforms to attract private sector investment. Complementing this is a proposed investment of 100 trillion KRW (71 billion USD) to accelerate AI innovation, next-generation semiconductors, and the development of AI infrastructure and innovation zones.

Across Africa, governments and partners are turning to AI as a catalyst for growth and governance reform, with national strategies and international investments converging to shape the continent’s digital future. Zimbabwe plans to launch a national AI policy to accelerate the adoption of the technology. Nigeria is preparing a national framework to guide responsible use of AI in governance, healthcare, education and agriculture. Japan has pledged $5.5 billion in loans and announced an ambitious AI training programme to deepen economic ties with Africa.

Latin America, by contrast, continues to struggle to join the global AI race. According to a July 2025 study by the UN Economic Commission for Latin America and the Caribbean, Latin America is lagging behind most advanced economies in terms of AI spending. The region’s spending reached US$2.6 billion in 2023, representing only 1.56% of global AI spending, while the region’s economy represents nearly 6.3% of global GDP. The study urges Latin America to accelerate AI adoption, especially among SMEs, by boosting skilled labour through education and training, promoting sector-specific use cases, and establishing technology centres. Without these measures, the region risks underusing AI’s potential despite its significant economic weight.
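
A purely illustrative calculation based on the ECLAC figures cited above shows how the region’s AI spending compares with its economic weight:

```python
# Rough arithmetic using the ECLAC figures cited above (2023 data, indicative only).
latam_ai_spending_bn = 2.6        # US$ billion spent on AI in the region
latam_share_of_spending = 0.0156  # 1.56% of global AI spending
latam_share_of_gdp = 0.063        # ~6.3% of global GDP

implied_global_spending_bn = latam_ai_spending_bn / latam_share_of_spending
print(f"Implied global AI spending: about US${implied_global_spending_bn:.1f} billion")
# -> about US$166.7 billion

# If the region spent in proportion to its economic weight, the figure would be roughly:
gdp_proportional_bn = implied_global_spending_bn * latam_share_of_gdp
print(f"GDP-proportional spending: about US${gdp_proportional_bn:.1f} billion")
# -> about US$10.5 billion, roughly four times the actual US$2.6 billion
```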

Amidst this competitive landscape, there are also moves toward international cooperation. China’s Global AI Governance Action Plan, published just days after America’s AI Action Plan, calls for an inclusive AI governance model with multistakeholder participation. China proposed the establishment of an international AI cooperation organisation, hoping to ‘assist countries in the Global South to strengthen their capacity-building, nurture an AI innovation ecosystem, ensure that developing countries benefit equally from waves of AI, and promote the implementation of the UN 2030 Agenda for Sustainable Development.’ This idea recently received support from Kazakhstan.

It is currently unclear how this newly proposed international AI cooperation organisation would interact with the UN Independent International Scientific Panel on AI and the Global Dialogue on AI Governance, which China has expressed support for, and whose operational details were set out at the end of the summer. The creation of these two mechanisms was formally agreed by UN member states in September 2024, as part of the GDC. In August, UNGA resolution A/RES/79/325 set out their terms of reference and modalities.


The 40-member Scientific Panel has the main task of ‘issuing evidence-based scientific assessments synthesising and analysing existing research related to the opportunities, risks and impacts of AI’, in the form of one annual ‘policy-relevant but non-prescriptive summary report’ to be presented to the Global Dialogue. The Panel will also ‘provide updates on its work up to twice a year to hear views through an interactive dialogue of the plenary of the General Assembly with the Co-Chairs of the Panel’. 

The Global Dialogue on AI Governance, to involve governments and all relevant stakeholders, will function as a platform ‘to discuss international cooperation, share best practices and lessons learned, and to facilitate open, transparent and inclusive discussions on AI governance with a view to enabling AI to contribute to the implementation of the Sustainable Development Goals and to closing the digital divides between and within countries’.

Another GDC commitment was a Global Fund for AI to scale up AI capacity development for sustainable development. The UN Secretary-General’s report on Innovative Voluntary Financing Options for AI Capacity Building (A/79/966), published this July, proposes a Global Fund for AI with an initial target of US $1–3 billion. It would help countries advance in AI readiness, focusing on foundations (compute, data, skills) and enablers (national strategies, cooperation). Funding would rely on voluntary government, philanthropic, private sector, and development bank contributions, with governance through a steering committee, technical panels, and multistakeholder input. Options for capitalisation include a small levy on tech transactions, digital asset contributions, and co-financing with banks, alongside tools such as AI bonds, conditional debt forgiveness, and blended financing. A coordination platform is also proposed to align funders, improve strategic coordination, and standardise monitoring. The report will be considered by the UNGA.

Whether the coming years bring fragmentation into rival technological spheres or a fragile framework for cooperation will depend on how states reconcile national ambitions with global responsibilities. The outcome of this delicate balance may determine not only who leads in AI, but how humanity as a whole lives with it.

As students return to classrooms and diplomats to negotiation tables, the question looms: where is AI really heading?

This summer marked a turning point. The dominant AI narrative – bigger is better – collapsed under its own weight. That story ended this August with the much-hyped launch of GPT-5. Bigger models are not necessarily smarter models, and exponential progress cannot be sustained by brute force alone.

This autumn, then, can be a season of clarity. In the following analysis, we outline ten lessons from the summer of AI disillusionment, developments that will shape the next phase of the AI story. 

1. Hardware: More is not necessarily better; small AI matters. Nvidia’s rise epitomised the belief that more compute ensures AI progress, but GPT-5 and new studies show diminishing returns. Core model flaws persist, prompting a shift from mega-systems toward diversified, smaller-scale hardware tailored to specific applications.

2. Software: The open-source gambit. Open-source AI surged in 2025, led by China’s DeepSeek and mirrored in the US strategy, challenging the dominance of closed labs. With strong performance at lower cost, open models spread rapidly, reframing debates on safety and shifting power dynamics. Open code became both a tool for innovation and a form of geopolitical soft power.

3. Data: Hitting the limit and turning to knowledge
AI is running out of high-quality training data, pushing a shift from raw text to structured human knowledge. Companies now court experts, adopt retrieval-augmented systems, and build knowledge graphs to ground outputs (see the illustrative sketch after this summary). This raises governance questions over ownership and fairness, as the risk grows that collective knowledge could be enclosed by a few corporations.

4. Economy: Between commodity and bubble
AI is both a cheap commodity and a speculative bubble. Open models and efficient tools democratise access, while massive investment inflates valuations and risks a crash. The challenge is distinguishing hype from real value: supporting sustainable applications while avoiding the fallout of an overheated market.

5. Risks: From existential to existing
The debate has shifted from distant existential threats to tangible present-day harms—bias, job loss, misinformation, and accountability. Overhyped AGI timelines have lost credibility, while regulators and civil society increasingly push to address AI as a product subject to current laws. Tackling today’s risks builds trust and stability for AI’s future.

6. Education: The front line of disruption
AI has upended traditional teaching by automating essay writing and assessments, creating both crisis and opportunity. Schools must shift from banning AI to rethinking pedagogy—focusing on critical thinking, creativity, and human judgment—while using AI to personalise learning and offload routine tasks. Education reform will determine whether students become AI-empowered or AI-dependent.

7. Philosophy: From ethics towards epistemology
Debates are moving beyond checklists of “AI ethics” toward deeper questions of knowledge and truth. As AI-generated content shapes cognition, concerns focus on how we know, who defines truth, and what reliance on algorithms does to human agency. This epistemological turn reframes AI not just as a tool but as a force reshaping understanding itself.

8. Politics and regulation: Techno-geopolitical realism
The USA, China, and EU now treat AI as strategic infrastructure, tying it to economic security and global power. Washington prioritises dominance and supply chain control; Beijing accelerates national integration and champions open-source abroad; Brussels pushes sovereignty through investment and regulation. Lofty AGI fears have given way to pragmatic competition, with cooperation at risk but realism rising.

9. Diplomacy: The UN moves slowly but surely
The UN has emerged as a steady player in AI governance, adopting resolutions that stress capacity-building, funding, and inclusive cooperation. Proposals include a Global Fund for AI, an international scientific panel, and a Global Dialogue. Though success depends on political will and financing, the UN is carving a role as a legitimate, development-focused convener.

10. Narrative collapse: From hype to realism
The AI hype cycle is deflating, exposing overblown promises and forcing a reset. Long-term doom predictions and inflated valuations are giving way to sober focus on practical applications, human knowledge, and local empowerment. This narrative shift—if matched with transparency and tech literacy—could mark the start of a more grounded, human-centred AI era.

This summary is adapted from Dr Jovan Kurbalija’s article ‘From summer disillusionment to autumn clarity: Ten lessons for AI.’ Read the full article.
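To make the third lesson’s ‘turn to knowledge’ concrete, here is a minimal, self-contained sketch of how a retrieval-augmented system grounds an answer in a small curated knowledge store rather than generating text without a source. It is an illustration only, not drawn from the article: the toy documents, the bag-of-words scoring, and the relevance threshold are all assumptions made for the example.

```python
# Minimal sketch of retrieval-augmented grounding (illustrative only).
# Before answering, the system retrieves the most relevant entry from a
# small, curated knowledge store and cites it as the source of the answer.
# The documents, scoring method, and threshold are toy assumptions.

from collections import Counter
import math


def tokenize(text: str) -> list[str]:
    return [token.lower().strip(".,?!") for token in text.split()]


def cosine_similarity(a: Counter, b: Counter) -> float:
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


# A toy knowledge store standing in for expert-curated content or a knowledge graph.
KNOWLEDGE_STORE = {
    "wsis": "The WSIS outcomes are being reviewed twenty years after the 2003-2005 summits.",
    "oewg": "The OEWG negotiates norms of responsible state behaviour in cyberspace.",
}


def retrieve(query: str, threshold: float = 0.1):
    """Return (doc_id, text) for the best-matching entry, or None if nothing is relevant enough."""
    query_vector = Counter(tokenize(query))
    best_id, best_score = None, 0.0
    for doc_id, text in KNOWLEDGE_STORE.items():
        score = cosine_similarity(query_vector, Counter(tokenize(text)))
        if score > best_score:
            best_id, best_score = doc_id, score
    if best_id is None or best_score < threshold:
        return None
    return best_id, KNOWLEDGE_STORE[best_id]


def grounded_answer(query: str) -> str:
    hit = retrieve(query)
    if hit is None:
        return "No grounded answer available."  # refuse rather than answer without a source
    doc_id, text = hit
    return f"{text} [source: {doc_id}]"


if __name__ == "__main__":
    print(grounded_answer("What does the OEWG negotiate?"))
```

Run as-is, the sketch prints the matching entry together with its source identifier and refuses to answer when nothing in the store is relevant enough – the basic behaviour the ‘grounding’ trend aims for.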

This summer has seen a surge of cyberattacks linked to state-backed groups, underscoring how digital intrusions have become a central feature of geopolitical rivalry.

Microsoft has again become the focal point of high-stakes cyber operations. A flaw in its SharePoint software has triggered a wave of attacks that spread rapidly from targeted espionage into broader exploitation. Google and Microsoft confirmed that Chinese-linked groups were among the first movers, but soon both cybercriminals and other state-sponsored actors joined in. More than 400 organisations have reportedly been compromised, making the incident one of the most far-reaching Microsoft-linked breaches since the Exchange server attacks in 2021. The sheer scale of the breach—the compromise of millions of personal data records—demonstrated the blurring of lines between espionage, mass surveillance, and strategic influence operations.

A new joint cybersecurity advisory (CSA) was released on 27 August by over a dozen international law enforcement organisations, detailing the inner workings of Chinese APT threats.

The episode has further sharpened tensions between Washington and Beijing. While the USA accused China of orchestrating intrusions through the Salt Typhoon group—an operation that siphoned off data from millions of Americans—Beijing countered with claims that the USA itself had weaponised a Microsoft server vulnerability for offensive operations. In parallel, Microsoft announced restrictions on Chinese access to its cyber early warning system, signalling a deliberate shift in how it manages security cooperation with China.

In Asia, Chinese-linked groups infiltrated telecom networks across Southeast Asia and also targeted Singapore’s critical infrastructure, prompting a government investigation.

Russian-linked operations remain among the most disruptive, blending espionage, sabotage, and hybrid tactics. In the USA, federal courts confirmed that their systems were targeted by a cyberattack, with reports suggesting Moscow was responsible. The FBI separately warned that Russian groups continue to probe critical infrastructure by targeting the networking devices that underpin its IT systems.

In Europe, Russia is suspected of orchestrating sabotage and hybrid pressure campaigns. Norway’s intelligence chief attributed the sabotage of a dam in April to Russian hackers, while Brussels reported GPS jamming that disrupted European Commission President Ursula von der Leyen’s flight, also linked to Moscow. These incidents point to an increasing willingness of threat actors to deploy cyber and electronic warfare not only against military targets but also against civilian infrastructure and political figures. Italy faced its own test when suspected Indian state-backed hackers targeted defence firms, suggesting that middle powers are increasingly entering the state-backed cyber arena.

Nowhere is the fusion of cyber and kinetic conflict clearer than in Ukraine. By meticulously gathering and analysing digital data from the conflict within its borders, Ukraine has provided invaluable insights to its allies. This trove of information demonstrates how digital forensics can not only aid in defence but also strengthen international partnerships and understanding in a complex world.

In a fraught environment such as this, there is a continuous effort to manage risk, protect systems, and navigate the intricate diplomatic realities of the digital age. The irony is that as these attacks unfolded, so did the negotiations at the UN Open-Ended Working Group (OEWG) on cybersecurity, which culminated in the successful adoption of the group’s Final report. The OEWG plays a central role in making cyber rules, providing a forum where states negotiate norms, principles, and rules of responsible behaviour in cyberspace. Yet what good are rules if not implemented? The OEWG has historically struggled to translate non-binding norms into practice: one such norm from 2015 calls on states not to knowingly allow malicious cyber activity to be conducted from their territory, but the cyberattacks this summer—and, let’s be frank, since 2015—prove otherwise. Yet, the Global Mechanism, which was agreed upon in the Final report, could bring change. States will have the opportunity to draft and ultimately adopt action-oriented recommendations—let’s see how they will use it in the future.

The OEWG on ICT security has adopted its Final Report after intense negotiations on responsible state behaviour in cyberspace. As always, compromises among diverse national interests – especially those of the major powers – mean a watered-down text. While no revolutionary progress has been made, there’s still plenty to highlight.


States recognised the international security risks posed by ransomware, cybercrime, AI, quantum tech, and cryptocurrencies. The document supports concepts like security-by-design and quantum cryptography, but doesn’t contain concrete measures. Commercial cyber intrusion tools (spyware) were flagged as threats to peace, though proposals for oversight were dropped. International law remains the only limit on tech use, mainly in conflict contexts. Critical infrastructure (CI), including fibre networks and satellites, was a focus, with cyberattacks on CI recognised as threats.

The central debate on norms focused on whether the final report should prioritise implementing existing voluntary norms or developing new ones. Western and like-minded states emphasised implementation and called for deferring decisions on new norms to the future permanent mechanism, while several developing countries supported this focus but highlighted capacity constraints. In contrast, another group of countries argued for continued work on new norms. Some delegations sought a middle ground by supporting implementation while leaving space for future norm development. At the same time, the proposed Voluntary Checklist of Practical Actions received broad support. As a result, the Final Report softened language on additional norms, while the checklist was retained for continued discussion rather than adoption.

The states agreed to continue discussions on how international law applies to states’ use of ICT in the future Global Mechanism, confirming that international law, and in particular the UN Charter, applies in cyberspace. The states also saw great value in exchanging national positions on the applicability of international law and called for increased capacity building efforts in this area to allow for the meaningful participation of all states.

The agreement to establish a dedicated thematic group on capacity building stands out as a meaningful step, providing formal recognition of capacity building as a core pillar. Yet, substantive elements, particularly those related to funding, were left unresolved. The UN-run Global ICT Security Cooperation and Capacity-Building Portal (GSCCP) will proceed through a modular, step-by-step development model, and roundtables will continue to promote coordination and information exchange. However, proposals for a UN Voluntary Fund and a fellowship programme were deferred.

Prioritising the implementation of existing confidence-building measures (CBMs) rather than adopting new ones crystallised during this last round of negotiations, despite some states’ push for additional commitments such as equitable ICT market access and standardised templates. Proposals lacking broad support—like Iran’s ICT market access CBM, the Secretariat’s template, and the inclusion of Norm J on vulnerability disclosure—were ultimately excluded or deferred for future consideration.

States agreed on what the future Global Mechanism will look like and how non-governmental stakeholders will participate in it. The Global Mechanism will hold substantive plenary sessions once a year during each biennial cycle, work in two dedicated thematic groups (one on specific challenges, one on capacity building) that will allow for more in-depth discussions building on the plenary’s work, and hold a review conference every five years. Relevant non-governmental organisations with ECOSOC status can be accredited to participate in the substantive plenary sessions and review conferences of the Global Mechanism, while other stakeholders would have to undergo accreditation on a non-objection basis.
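For readers who prefer a structural snapshot, the sketch below simply restates the agreed elements of the future Global Mechanism as a data outline. It is a reading aid only; the field names are ours, not agreed terminology.

```python
# Schematic, unofficial summary of the future Global Mechanism as described
# in the OEWG Final Report; the field names below are illustrative.
GLOBAL_MECHANISM = {
    "plenary": "one substantive plenary session per year in each biennial cycle",
    "thematic_groups": [
        "specific challenges",   # in-depth discussions building on the plenary's work
        "capacity building",
    ],
    "review_conference": "every five years",
    "stakeholder_participation": {
        "NGOs with ECOSOC status": "accredited to plenaries and review conferences",
        "other stakeholders": "accreditation on a non-objection basis",
    },
}

if __name__ == "__main__":
    for element, detail in GLOBAL_MECHANISM.items():
        print(f"{element}: {detail}")
```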

WSIS+20 High-Level Event 2025

This summer, Geneva became the stage for one of the year’s most significant global digital gatherings. From 7 to 11 July 2025, the city hosted the WSIS+20 High-Level Event, held alongside the AI for Good Global Summit.

The week-long deliberations were framed as part of preparations for the UN General Assembly’s WSIS+20 Review, scheduled for 16–17 December 2025. That review will reaffirm international commitment to the WSIS process and set strategic direction for the next two decades of digital cooperation. 


The Chair’s Summary, issued by South Africa’s Minister of Communications Solly Malatsi, underscored WSIS’s role as a cornerstone of global digital cooperation. Over the past twenty years, the WSIS architecture — anchored in the Geneva Plan of Action and the Tunis Agenda — has expanded connectivity, empowered users, and guided national and international strategies to bridge digital divides. Today, more than 5.5 billion people (68% of the world’s population) are online, up from fewer than one billion in 2005. Yet 2.6 billion people remain unconnected, concentrated in developing countries, least developed countries, and marginalised communities, making universal connectivity the most urgent unfinished task.

Some discussions in Geneva revolved around how to adapt the implementation of WSIS Action Lines to new realities: the rise of AI, quantum, and space technologies; persistent digital divides; and the implementation of the GDC. Participants largely agreed that existing mechanisms — the WSIS Forum, the Internet Governance Forum (IGF), and initiatives such as AI for Good — are indispensable, and ideally positioned to implement the GDC and translate its principles into measurable action.

Several themes stood out. First, the need to ensure that digital governance keeps pace with unpredictable technological progress, while safeguarding human rights, cultural and linguistic diversity, and local realities. Second, the importance of youth engagement: more than 280 young people participated in a dedicated Youth Track, proposing co-leadership roles, grassroots funds, and a permanent WSIS Youth Programme. Third, recognition that inclusion must go beyond connectivity to encompass affordability, digital skills, and rights-based participation.

Participants also emphasised that the WSIS process must continue to link digital innovation with sustainability goals, integrating green technology and climate-smart solutions. Ethical and rights-based approaches to AI and other emerging technologies were highlighted as essential, alongside stronger international cooperation to address cybersecurity threats, disinformation, and online harms.

The Chair’s Summary concluded with a clear message: WSIS will remain the central platform for advancing digital cooperation beyond 2025, ensuring that the gains of the past two decades are consolidated while adapting to new realities. True inclusion, it stressed, is not only about being present but about being heard — there is a need to engage those still excluded, reflect diverse local and global experiences, and continue advancing WSIS’s vision of an equitable, people-centred information society over the next 20 years.

Our session reports and AI insights from both events can be found on the dedicated WSIS+20 High-Level Event 2025 and AI for Good Global Summit 2025 web pages on the Digital Watch Observatory.

Zero Draft of WSIS+20 outcome document

In the lead-up to the UN General Assembly’s high-level meeting dedicated to the WSIS+20 Review, scheduled for 16–17 December 2025, negotiations and consultations are focused on concrete text for what will become a WSIS+20 outcome document. This concrete text – called the zero draft – was released on 30 August.

Digital divides and inclusion take centre stage in the zero draft. While connectivity has expanded – 95% of the global population is now within reach of broadband, and internet use has grown from 15% in 2005 to 67% in 2025 – significant gaps remain. Disparities persist across countries, urban and rural areas, genders, persons with disabilities, older populations, and minority language speakers. The draft calls for affordable entry-level broadband, local multilingual content, digital literacy, and mechanisms to connect the unconnected, ensuring equitable access.

The digital economy continues to transform trade, finance, and industry, creating opportunities for small and women-led businesses but also risks deepening inequalities through concentrated technological power and automation. Against this backdrop, the draft outlines a commitment to supporting the development of digital financial services, and a call for stakeholders to foster ‘open, fair, inclusive and non-discriminatory digital environments’.

Environmental sustainability is a key consideration, as ICTs facilitate monitoring of climate change and resource management, yet their growth contributes to energy demand, emissions, and electronic waste. Standing out in the draft is a call to develop global reporting standards on environmental impacts, as well as global standards for sustainable product design and circular economy practices, to align digital innovation with environmental goals.

The Zero Draft reaffirms human rights, confidence and security, and multistakeholder internet governance as central pillars of the digital ecosystem. Human rights are positioned as the foundation of digital cooperation, with commitments to protect freedom of expression, privacy, access to information, and the rights of women, children, and other vulnerable groups. Strengthening confidence and security in the use of technology is seen as essential for innovation and sustainable development, with emphasis on protecting users from threats such as online abuse and violence, hate speech, and misinformation, while ensuring safeguards for privacy and freedom of expression.

The draft outlines a series of key (desirable) attributes for the internet – open, free, global, interoperable, reliable, secure, stable – and highlights the need for more inclusive internet governance discussions, across stakeholder groups (governments, the private sector, civil society, academia, and technical communities) and across developed and developing countries alike. 

To advance capacity building in relation to AI, the draft proposes a UN AI research programme and AI capacity building fellowship, both with a focus on developing countries. In parallel, the draft welcomes ongoing initiatives such as the Independent International Scientific Panel on AI and the Global Dialogue on AI Governance.

Recognising the critical importance of global cooperation in internet governance, the draft designates the Internet Governance Forum (IGF) as a permanent UN body and calls for enhanced secretariat support, enhanced working methods, and reporting on outcomes to UN entities and processes (which are then called to duly take these outputs into account in their work). The long-discussed issue of IGF financial sustainability is addressed in the form of a request for the UN Secretary-General to make proposals on future funding. 

Finally, the draft looks at the interplay between WSIS, the Global Digital Compact and the 2030 Agenda for Sustainable Development, and outlines several mechanisms for better connecting them and avoiding duplication and overlaps. These include a joint WSIS-GDC implementation roadmap, the inclusion of GDC review and follow-up into existing annual WSIS mechanisms (at the level of the Commission on Science and Technology for Development and the Economic and Social Council), and reviews of GDC-WSIS alignment at the GA level. Speaking of overall reviews, the draft also envisions a combined review of Agenda 2030 and of the outcomes of the WSIS-GDC joint implementation roadmap in 2030, as well as a WSIS+30 review in 2035.

Looking ahead

The Zero Draft sets the stage for intense negotiations ahead of the December 2025 High-Level Meeting. Member states and other stakeholders are invited to submit comments until 26 September. It then remains to be seen what a second version of the outcome document will look like, and which elements are kept, revised, or removed.

Follow the process with us on our dedicated WSIS+20 web page, where we will track key developments, highlight emerging debates, and provide expert analysis as the negotiations unfold.

Weekly #228 Roadmap to the digital future: WSIS+20 zero draft paves the way


29 August – 5 September 2025



Dear readers,

In 2003-2005, a landmark UN summit – the World Summit on the Information Society (WSIS) – outlined a vision for an inclusive information society, set out recommendations for making this vision a reality, and laid the basis for much of what we call today the global digital governance architecture. Twenty years later, UN member states are looking at progress made in achieving the goals set back then and areas requiring further effort, as well as at whether the WSIS architecture needs updates. This unfolding WSIS+20 review process will end in December 2025 with a high-level meeting of the UN General Assembly. In the lead-up to the meeting, negotiations and consultations are now focused on concrete text for what will become a WSIS+20 outcome document. This concrete text – called the zero draft – was released last week.

Digital divides and inclusion take centre stage in the zero draft. While connectivity has expanded – 95% of the global population is now within reach of broadband, and internet use has grown from 15% in 2005 to 67% in 2025 – significant gaps remain. Disparities persist across countries, urban and rural areas, genders, persons with disabilities, older populations, and minority language speakers. The draft calls for affordable entry-level broadband, local multilingual content, digital literacy, and mechanisms to connect the unconnected, ensuring equitable access.

The digital economy continues to transform trade, finance, and industry, creating opportunities for small and women-led businesses but also risks deepening inequalities through concentrated technological power and automation. Against this backdrop, the draft outlines a commitment to supporting the development of digital financial services, and a call for stakeholders to foster ‘open, fair, inclusive and non-discriminatory digital environments’.

Environmental sustainability is a key consideration, as ICTs facilitate monitoring of climate change and resource management, yet their growth contributes to energy demand, emissions, and electronic waste. Standing out in the draft is a call to develop global reporting standards on environmental impacts, as well as global standards for sustainable product design and circular economy practices, to align digital innovation with environmental goals.

The Zero Draft reaffirms human rights, confidence and security, and multistakeholder internet governance as central pillars of the digital ecosystem. Human rights are positioned as the foundation of digital cooperation, with commitments to protect freedom of expression, privacy, access to information, and the rights of women, children, and other vulnerable groups. Strengthening confidence and security in the use of technology is seen as essential for innovation and sustainable development, with emphasis on protecting users from threats such as online abuse and violence, hate speech, and misinformation, while ensuring safeguards for privacy and freedom of expression.

The draft outlines a series of key (desirable) attributes for the internet – open, free, global, interoperable, reliable, secure, stable – and highlights the need for more inclusive internet governance discussions, across stakeholder groups (governments, the private sector, civil society, academia, and technical communities) and across developed and developing countries alike. 

To advance capacity building in relation to AI, the draft proposes a UN AI research programme and AI capacity building fellowship, both with a focus on developing countries. In parallel, the draft welcomes ongoing initiatives such as the Independent International Scientific Panel on AI and the Global Dialogue on AI Governance.

Recognising the critical importance of global cooperation in internet governance, the draft designates the Internet Governance Forum (IGF) as a permanent UN body and calls for enhanced secretariat support, enhanced working methods, and reporting on outcomes to UN entities and processes (which are then called to duly take these outputs into account in their work). The long-discussed issue of IGF financial sustainability is addressed in the form of a request for the UN Secretary-General to make proposals on future funding. 

Finally, the draft looks at the interplay between WSIS, the Global Digital Compact and the 2030 Agenda for Sustainable Development, and outlines several mechanisms for better connecting them and avoiding duplication and overlaps. These include a joint WSIS-GDC implementation roadmap, the inclusion of GDC review and follow-up into existing annual WSIS mechanisms (at the level of the Commission on Science and Technology for Development and the Economic and Social Council), and reviews of GDC-WSIS alignment at the GA level. Speaking of overall reviews, the draft also envisions a combined review of Agenda 2030 and of the outcomes of the WSIS-GDC joint implementation roadmap in 2030, as well as a WSIS+30 review in 2035.

Looking ahead

The Zero Draft sets the stage for intense negotiations ahead of the December 2025 High-Level Meeting. Member states and other stakeholders are invited to submit comments until 26 September. It then remains to be seen what a second version of the outcome document will look like, and which elements are kept, revised, or removed.

👉 Follow the process with us on our dedicated web page, where we will track key developments, highlight emerging debates, and provide expert analysis as the negotiations unfold.

DW Team


Highlights from the week of 29 August – 5 September 2025

Switzerland’s new AI model is designed to boost innovation while remaining fully transparent and accessible to all.


The USA revoked TSMC’s licence to ship advanced technology to China, adding pressure to global semiconductor supply chains.


The ruling bars Google from exclusive distribution deals for products like Search, Chrome and Gemini.


The EU General Court upheld the EU–US Data Privacy Framework, rejecting claims it lacks adequate safeguards and independence in oversight of US data practices involving personal data from the EU.


The flagship document outlines projects in AI, quantum, cloud, and space, promotes a Franco-German digital ecosystem for public services, and sets the stage for the 2025 European Digital Sovereignty Summit.


A joint cybersecurity advisory details how Salt Typhoon exploited unpatched network-edge devices to infiltrate telecommunications, military and government systems across 13 countries.


The SCO Tianjin Declaration emphasised cyber sovereignty, inclusive AI development, global cybersecurity norms, and stronger cooperation in the digital economy.


A global report found 63% of employers say AI has significantly boosted productivity at work.


READING CORNER

As classrooms and negotiation tables fill again, a pressing question lingers: where is AI headed? This summer marked a turning point, as the ‘bigger is better’ narrative faltered. This blog captures ten key lessons from a season of AI disillusionment.


Why apprenticeship and storytelling are the future of learning in the AI era. AI is forcing us to ask a deeper question: what is the real purpose of learning?

UPCOMING EVENTS

The webinar will bring together African experts from technology, development, diplomacy and policy domains to discuss which digital issues must be urgently prioritised to keep Africa on course in a rapidly changing world.


In this one-hour session, several experts – Asoke Mukerji, Isaac Morales Tenorio, and Fan Yang – will debate the future of global cyber negotiations — tackling obstacles, testing new ideas, and asking whether the UN dialogue can move from compromise to real progress.


This session of the UN Human Rights Council provides a key platform for the international community to discuss, promote, and protect human rights worldwide.


The event will discuss the progress made by the Multi-Stakeholder Working Group on Data Governance and the expectations for the next meeting, which will take place on 15-16 September.

Weekly #227 – UNGA adopts new AI resolution, Trump threatens tariffs over EU digital taxes, OpenAI updates ChatGPT safety after teen suicide


22 – 29 August 2025



Dear readers,

On 26 August 2025, the UN General Assembly (UNGA) adopted a resolution establishing two new mechanisms for global AI governance: an Independent International Scientific Panel on AI and a Global Dialogue on AI Governance. The 40-member Panel will provide annual, evidence-based assessments of AI’s opportunities, risks, and impacts, while the Global Dialogue will serve as a platform for governments and relevant stakeholders to discuss international cooperation, exchange best practices, and foster inclusive discussions on AI governance.

The Dialogue will be launched during UNGA’s 80th session in September 2025 and will convene annually, alternating between Geneva and New York, alongside existing UN events. These mechanisms also aim to contribute to capacity development efforts on AI. The resolution also invites states and stakeholders to contribute resources, particularly to ensure participation from developing countries, and foresees that a review of both initiatives may happen at UNGA’s 82nd session.

Other highlights of the week:

US President Donald Trump has warned that he may impose retaliatory tariffs on countries introducing digital taxes or regulations targeting American tech giants, a move seen as a direct warning to the EU. Several European states and the EU itself have rolled out measures such as the Digital Services Act, the Digital Markets Act, and digital services taxes to regulate big platforms and ensure companies like Google, Apple, Amazon, and Meta pay fair taxes locally. Trump’s threat also puts renewed pressure on the UK, which continues to uphold its digital services tax despite a trade deal with Washington. Besides that, the US Federal Trade Commission (FTC) warned tech companies that complying with the EU and UK online content and encryption rules could breach US law under Section 5 of the FTC Act.

Alphabet’s Google has announced a $9 billion investment in Virginia by 2026, reinforcing the state’s status as a key US data infrastructure hub, with plans for a new Chesterfield County facility and expansions in Loudoun and Prince William counties to boost AI and cloud computing capabilities. The investment, supported by Dominion Energy and expected to take up to seven years to operationalise fully, aligns with a broader tech trend where giants like Microsoft, Amazon, Meta, and Alphabet are pouring hundreds of billions into AI projects, though it raises energy demand concerns that Google aims to address through efficiency measures and community funding.

INTERPOL’s ‘Serengeti 2.0’ operation across Africa led to over 1,200 arrests between June and August 2025, targeting ransomware, online fraud, and business email compromise schemes, and recovering nearly USD 100 million stolen from tens of thousands of victims. Authorities shut down illicit cryptocurrency mining sites in Angola, dismantled a massive crypto fraud scheme in Zambia, and uncovered a human trafficking network with forged passports in Lusaka.

OpenAI announced new safety measures for ChatGPT after a lawsuit accused the chatbot of contributing to a teenager’s suicide. The company plans to enhance detection of mental distress, improve safeguards in suicide-related conversations, add parental controls, and provide links to emergency services while addressing content filtering flaws. Regulators and mental health experts are intensifying scrutiny, warning that growing reliance on chatbots instead of professional care could endanger vulnerable users, especially children.

The battle of the giants: Elon Musk’s xAI has sued Apple and OpenAI in Texas, accusing them of colluding to monopolise the AI market through Apple’s exclusive 2024 deal to integrate ChatGPT into its devices, which allegedly disadvantaged Musk’s X and Grok apps. Musk, seeking billions in damages and a jury trial, argues the partnership stifles competition and reflects Apple’s antitrust violations.

For the main updates, reflections and events, consult the RADAR, the READING CORNER and the UPCOMING EVENTS section below.

Join us as we connect the dots, from daily updates to main weekly developments, to bring you a clear, engaging monthly snapshot of worldwide digital trends.

DW Team


RADAR

Highlights from the week of 22 – 29 August 2025


US officials link Beijing-backed Salt Typhoon spies to breaches at major telcos and government networks.


AI’s rapid rise is reshaping how nations think about energy, opening the door to new partnerships that could redefine the path toward a cleaner and smarter future.


The launch coincides with federal plans to boost AI while limiting regulation-heavy states.


A new wave of Android malware deployed through fake utilities on the Play Store infected millions, using overlay attacks to harvest financial credentials and deploy adware.


The project highlights the EU’s focus on preparedness, with ENISA tasked to oversee the technical and operational standards of the reserve.


By experimenting with AI edits without approval, YouTube has angered creators and renewed debates about trust, regulation and control in the age of AI.


Fake AI-generated albums mimicking folk singer Emily Portman appeared on Spotify, sparking copyright complaints.


Instead of cutting jobs, Google is investing in AI training through its new AI Savvy Google programme to upskill its workforce.


A $2.7 billion whale sell-off triggered liquidations, weakening Bitcoin near key supports while Ethereum maintains stronger technical metrics and positive momentum.


Age verification law could reshape online access and entrench big tech dominance.


ShinyHunters breached Google systems, sparking new phishing threats against Gmail accounts.


Humanlike AI may distort reality as people form emotional attachments, experts caution.


Salt Typhoon, observed since 2019, has been linked to targeting routers, VPNs and edge devices, with more than 200 US companies reportedly impacted.


A teenager’s death has sparked calls for stronger safeguards on ChatGPT and similar AI systems.


READING CORNER

Wheels, wagons, and metal turned herders into mobile nomads. With speed on their side, raiding – and empire-building – became possible. Aldo Matteucci writes.


AI is emerging as both a driver of environmental strain and a potential force for sustainable solutions, raising urgent questions about whether innovation and ecological responsibility can truly advance together.


Despite US and Israeli strikes, Iran’s nuclear program remains alive, exposing the double standards of global nuclear politics.


This blog discusses how different cultural and philosophical traditions can be used as a strong foundation for global AI governance discussions.

UPCOMING EVENTS

ISOC Brazil webinar on the responsibility of intermediaries and changes in the US policy landscape. The webinar will promote an in-depth discussion of these developments.


Declaring Independence in Cyberspace: Book Discussion. Diplo’s Director of Digital Trade and Economic Security, Marilia Maciel, will provide comments.