

Weekly #240 Code meets climate: AI and digital at COP30


21-28 November 2025


HIGHLIGHT OF THE WEEK

Code meets climate: AI and digital at COP30

COP30, the 30th annual UN climate meeting, officially wrapped up last Friday, 21 November. As the dust settles in Belém, we take a closer look at the outcomes with implications for digital technologies and AI.

In agriculture, momentum is clearly building. Brazil and the UAE unveiled AgriLLM, the first open-source large language model designed specifically for agriculture, developed with support from international research and innovation partners. The goal is to give governments and local organisations a shared digital foundation to build tools that deliver timely, locally relevant advice to farmers. Alongside this, the AIM for Scale initiative aims to provide digital advisory services, including climate forecasts and crop insights, to 100 million farmers.
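To give a sense of what building on such a shared foundation could look like in practice, here is a minimal sketch, in Python, of a farmer-advisory prompt served from an open-source LLM. The model identifier is hypothetical, since AgriLLM's actual weights, distribution channel, and interface were not detailed in the announcement.

# Minimal sketch of a locally run advisory tool built on an open agricultural LLM.
# The model id below is a placeholder, not AgriLLM's real identifier.
from transformers import pipeline

advisor = pipeline("text-generation", model="agri-llm/agrillm-base")  # hypothetical model id

prompt = (
    "Region: Para, Brazil. Crop: cassava. Forecast: heavy rain expected this week.\n"
    "Give concise planting and drainage advice for a smallholder farmer."
)

result = advisor(prompt, max_new_tokens=200, do_sample=False)
print(result[0]["generated_text"])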


Cities and infrastructure are also stepping deeper into digital transformation. Through the Infrastructure Resilience Development Fund, insurers, development banks, and private investors are pooling capital to finance climate-resilient infrastructure in emerging economies — from clean energy and water systems to the digital networks needed to keep communities connected and protected during climate shocks. 

The most explicit digital agenda surfaced under the axis of enablers and accelerators. Brazil and its partners launched the world’s first Digital Infrastructure for Climate Action, a global initiative to help countries adopt open digital public goods in areas such as disaster response, water management, and climate-resilient agriculture. The accompanying innovation challenge is already backing new solutions designed to scale.

The Green Digital Action Hub was also launched; it will help countries measure and reduce the environmental footprint of technology, while expanding access to digital tools for sustainability.

Training and capacity building received attention through the new AI Climate Institute, which will help the Global South develop and deploy AI applications suited to local needs — particularly lightweight, energy-efficient models.
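As a rough illustration of what 'lightweight, energy-efficient' can mean in practice, the sketch below applies one common technique, post-training dynamic quantisation, to a toy PyTorch network. It is a generic example under stated assumptions, not the Institute's own toolkit.

import os
import torch
import torch.nn as nn

# Toy network standing in for a small advisory or forecasting model.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

# Dynamic quantisation converts Linear weights from 32-bit floats to 8-bit integers,
# shrinking the model and cutting memory and energy use at inference time.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

def size_mb(m: nn.Module) -> float:
    # Serialise the weights to disk to compare on-disk footprints.
    torch.save(m.state_dict(), "tmp.pt")
    size = os.path.getsize("tmp.pt") / 1e6
    os.remove("tmp.pt")
    return size

print(f"fp32: {size_mb(model):.2f} MB, int8: {size_mb(quantized):.2f} MB")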

The Nature’s Intelligence Studio, grounded in the Amazon, will support nature-inspired innovation and introduce open AI tools that help match real-world sustainability challenges with bio-based solutions.

Finally, COP30 marked a first by placing information integrity firmly on the climate action agenda. With mis- and disinformation recognised as a top global risk, governments and partners launched a declaration and new multistakeholder process aimed at strengthening transparency, shared accountability, and public trust in climate information — including the digital platforms that shape it.

The big picture. Across all strands, COP30 sent a clear message: the digital layer is not optional; it is embedded in the core delivery of climate action.

IN OTHER NEWS LAST WEEK

This week in AI governance

The USA. The Genesis Mission was formally established by Executive Order on 24 November 2025, tasking the US Department of Energy (DOE) with leading a nationwide AI-driven scientific research effort. The Mission will build a unified ‘American Science and Security Platform,’ combining the DOE’s 17 national laboratories’ supercomputers, federal scientific datasets that have accumulated over decades, and secure high-performance computing capacity — creating what the administration describes as ‘the world’s most complex and powerful scientific instrument ever built.’

Under the plan, AI will generate ‘scientific foundation models’ and AI agents capable of automating experiment design, running simulations, testing hypotheses and accelerating discoveries in key strategic fields: biotechnology, advanced materials, critical minerals, quantum information science, nuclear fission and fusion, space exploration, semiconductors and microelectronics. 

The initiative is framed as central to energy security, technological leadership and national competitiveness — the administration argues that despite decades of rising research funding, scientific output per dollar has stagnated, and AI can radically boost research productivity within a decade. 

To deliver on these ambitions, the Executive Order sets a governance structure: the DOE Secretary oversees implementation; the Assistant to the President for Science and Technology will coordinate across agencies; and DOE may partner with private sector firms, academia and other stakeholders to integrate data, compute, and infrastructure.

The UK. The UK government has launched a major AI initiative to drive national AI growth, combining infrastructure investment, business support, and research funding. An immediate £150 million GPU deployment in Northamptonshire kicks off an £18 billion programme over five years to build sovereign AI capacity. Through an advanced-market commitment of £100 million, the state will act as a first customer for domestic AI hardware startups, helping de-risk innovation and boost competitiveness.

The plan includes AI Growth Zones, with a flagship site in South Wales expected to create over 5,000 jobs, and expanded access to high-performance computing for universities, startups, and research organisations. A dedicated £137 million “AI for Science” strand will accelerate breakthroughs in drug discovery, clean energy, and advanced materials, ensuring AI drives both economic growth and public value outcomes.

Bangladesh. A new report on the country's AI readiness highlights Bangladesh's relative strengths: a growing e-government infrastructure and generally high public trust in digital services. However, it also candidly maps structural challenges: uneven connectivity and unreliable power supply beyond major urban areas, a persistent digital divide (especially gender and urban–rural), limited high-end computing capacity, and insufficient data protection, cybersecurity and AI-related skills in many parts of society.

As part of its roadmap, the country plans to prioritise governance frameworks, capacity building, and inclusive deployment — especially ensuring that AI supports public-sector services in health, education, justice and social protection. 

Australia. Australia has launched the AI Safety Institute (AISI), a national centre tasked with consolidating AI safety research, coordinating standards development, and advising both government and industry on best practices. The Institute will assess the safety of advanced AI models, promote resilience against misuse or accidents, and serve as a hub for international cooperation on AI governance and research.

The EU. The European Commission has launched an AI whistle-blower tool, providing a secure and confidential channel for individuals across the EU to report suspected breaches of the AI Act, including unsafe or high‑risk AI deployments. The tool allows submissions in any EU official language, supports anonymity, and offers follow-up tracking, aiming to strengthen oversight and enforcement of EU AI regulations. 

With the launch of the tool, the EU aims to close gaps in the enforcement of the AI Act, increase the accountability of developers and deployers, and foster a culture of responsible AI use across member states. The tool is also intended to increase transparency, allowing regulators to react faster to potential violations without relying solely on audits or inspections.

United Arab Emirates. The AI for Development Initiative has been announced to advance digital infrastructure across Africa, backed by a US$1 billion commitment from the UAE. According to official statements, the initiative plans to channel resources to sectors such as education, agriculture, climate adaptation, infrastructure and governance, helping African governments to adopt AI-driven solutions even where domestic AI capacity remains limited. 

Though full details remain to be seen (e.g. selection of partner countries, governance and oversight mechanisms), the scale and ambition of the initiative signal the UAE’s aspiration to act not just as an AI adoption hub, but as a regional and global enabler of AI-enabled development.


From Australia to the EU: New measures shield children from online harms

Bans on social media use by under-16s are advancing globally, and Australia has gone the furthest in this effort. Regulators there have now widened the scope of the ban to include platforms like Twitch, which is classified as age-restricted due to its social interaction features. Meta has begun notifying Australian users believed to be under 16 that their Facebook and Instagram accounts will be deactivated starting 4 December, a week before the law officially takes effect on 10 December.

To support families through the transition, the government has established a Parent Advisory Group, bringing together organisations representing diverse households to help carers guide children on online safety, communication, and safe ways to connect digitally.

The ban has already provoked opposition. Less than two weeks before enforcement, two 15‑year-olds, backed by the advocacy group Digital Freedom Project, filed a constitutional challenge in the High Court. They argue the law unfairly limits under‑16s’ ability to participate in public debate and political expression, effectively silencing young voices on issues that affect them directly.

Malaysia also plans to ban social media accounts for people under 16 starting in 2026. The Cabinet approved the measure to protect children from online harms such as cyberbullying, scams, and sexual exploitation. Authorities are considering approaches such as electronic age verification using ID cards or passports, although the exact enforcement date has not been set.

EU lawmakers have proposed similar protections. The European Parliament adopted a non-legislative report calling for a harmonised EU minimum age of 16 for social media, video-sharing platforms, and AI companions, with access for 13–16-year-olds allowed only with parental consent. They support accurate, privacy-preserving age verification via the EU age-verification app and eID wallet, but emphasise that platforms must still design services that are safe by default.
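The sketch below illustrates the general idea behind privacy-preserving age verification: an issuer (such as an ID wallet) signs a simple over-16 claim, and the platform checks only that boolean, never seeing a birthdate. The flow, the names, and the HMAC stand-in for a real digital signature are simplifications for illustration, not the EU app's actual protocol.

import hmac, hashlib, json
from datetime import date

ISSUER_KEY = b"demo-issuer-key"  # placeholder for the issuer's signing key

def issue_age_claim(birthdate: date, min_age: int = 16) -> dict:
    # Issuer side: derive a boolean claim from the birthdate, then discard the birthdate.
    today = date.today()
    age = today.year - birthdate.year - ((today.month, today.day) < (birthdate.month, birthdate.day))
    claim = {"over_min_age": age >= min_age, "min_age": min_age}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def platform_verifies(claim: dict) -> bool:
    # Platform side: check the signature and the boolean; no birthdate is ever shared.
    sig = claim.pop("signature")
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and claim["over_min_age"]

print(platform_verifies(issue_age_claim(date(2012, 5, 1))))  # False: under 16
print(platform_verifies(issue_age_claim(date(2005, 5, 1))))  # True: over 16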

Beyond age restrictions, the EU is strengthening broader safeguards. Member states have agreed on a Council position for a regulation to prevent and combat child sexual abuse online, requiring platforms to block child sexual abuse material (CSAM) and child solicitation, assess risks, and implement mitigation measures, including safer default settings, content controls, and reporting tools. National authorities will oversee compliance and may impose penalties, while high-risk platforms could also contribute to developing technologies to reduce risks. A new EU Centre on Child Sexual Abuse would support enforcement, maintain abuse material databases, and assist victims in removing exploitative images.

The European Parliament’s report also addresses everyday online risks, calling for bans on addictive features—such as infinite scrolling, autoplay, pull-to-refresh, reward loops, engagement-based recommendation algorithms, and gambling-like game elements like loot boxes. It urges action against kidfluencing, commercial exploitation, and generative AI risks, including deepfakes, AI chatbots, and nudity apps producing non-consensual content. Enforcement measures include fines, platform bans, and personal liability for senior managers in cases of serious or persistent breaches.
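For readers unfamiliar with the jargon, the toy example below contrasts an engagement-based ranking with a chronological, safe-by-default feed. The scoring weights are invented for illustration and do not reflect any platform's formula.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    created: datetime
    likes: int
    comments: int
    watch_seconds: float

posts = [
    Post("a", datetime(2025, 11, 25, 11), likes=5, comments=1, watch_seconds=12),
    Post("b", datetime(2025, 11, 25, 9), likes=900, comments=240, watch_seconds=95),
    Post("c", datetime(2025, 11, 25, 10), likes=40, comments=3, watch_seconds=30),
]

def engagement_rank(items):
    # Optimises for predicted attention: highly engaging content floats to the top,
    # regardless of age (weights are arbitrary, for illustration only).
    return sorted(items, key=lambda p: p.likes + 3 * p.comments + 0.5 * p.watch_seconds, reverse=True)

def chronological_rank(items):
    # 'Safe by default' alternative: newest first, no engagement signal at all.
    return sorted(items, key=lambda p: p.created, reverse=True)

print([p.author for p in engagement_rank(posts)])     # ['b', 'c', 'a']
print([p.author for p in chronological_rank(posts)])  # ['a', 'c', 'b']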


G20 leaders set digital priorities for a more inclusive global future 

This past weekend, G20 leaders held their annual Leaders' Summit in Africa for the first time. Discussions focused on AI, emerging digital technologies, bridging digital divides, and the role of critical minerals.

In their joint declaration, G20 leaders emphasised the transformative potential of AI and emerging digital technologies for sustainable development and reducing inequalities. They stressed the need for international cooperation to ensure AI benefits are equitably shared and that associated risks—including human rights, transparency, accountability, safety, privacy, data protection, and ethical oversight—are carefully managed. The declaration recognised the UN as a central forum for promoting responsible AI governance globally.

The leaders welcomed initiatives launched under South Africa’s presidency, including the Technology Policy Assistance Facility (TPAF) by UNESCO, which supports countries in shaping AI policy through global experiences and research. They also highlighted the AI for Africa Initiative, designed to strengthen the continent’s AI ecosystem by expanding computing capacity, developing talent, creating representative datasets, enhancing infrastructure, and fostering Africa-centric sovereign AI capabilities, supported through long-term partnerships and voluntary contributions.

The declaration reaffirmed G20 commitments to bridging digital divides, including halving the gender digital divide by 2030, promoting universal and meaningful connectivity, and building inclusive, safe, and resilient digital economies. Leaders emphasised the role of digital public infrastructure, modernised education systems, teacher empowerment, and skills development in equipping societies for the digital age. Tourism innovation, enhanced air connectivity, market access, and digital tools for MSMEs were also noted as priorities for sustainable and inclusive economic growth.

The rising global demand for critical minerals driven by sustainable transitions, digitisation, and industrial innovation was highlighted in the declaration. Leaders acknowledged challenges faced by producer countries, including underinvestment, limited value addition, technological gaps, and socio-environmental pressures. They welcomed the G20 Critical Minerals Framework, a voluntary blueprint promoting investment, local beneficiation, governance, and resilient value chains.


Ongoing Nexperia saga: Netherlands’ chip seizure meets China’s legal challenge

Two weeks ago, the Netherlands temporarily suspended its takeover of Nexperia, the Dutch chipmaker owned by China’s Wingtech, following constructive talks with Chinese authorities. 

However, tensions have persisted. Wingtech has challenged the Dutch intervention in court, while Beijing continues to press for a full reversal. Meanwhile, Nexperia's Dutch management has urged its Chinese units to cooperate to restore disrupted supply chains, which remain fragile after the earlier intervention. Wingtech now accuses the Dutch government of trying to permanently sever its control, leaving the situation unresolved: the company's ownership and the stability of critical chip flows between Europe and China are still in dispute, with potential knock-on effects for global industries such as automotive manufacturing.


LAST WEEK IN GENEVA

The 14th UN Forum on Business and Human Rights was held from Monday to Wednesday in Geneva and online, under the theme ‘Accelerating action on business and human rights amidst crises and transformations.’ The forum addressed key issues such as safeguarding human rights in the age of AI and exploring human rights and platform work in the Asia-Pacific region amid the ongoing digital shift. Additionally, a side event took a closer look at the labour behind AI.

LOOKING AHEAD

A reminder: Civil society organisations have until 30 November 2025 (this Sunday) to apply for the CADE Capacity Development Programme 2025–2026. The programme helps CSOs strengthen their role in digital governance through a mix of technical courses, diplomatic skills training, and expert guidance. Participants can specialise in AI, cybersecurity, or infrastructure policy, receive on-demand helpdesk support, and the most engaged will join a study visit to Geneva. Fully funded by the EU, the programme offers full scholarships to selected organisations, with a special welcome to those from the Global South and women-led groups.

The 2025 International AI Standards Summit will be held on 2–3 December in Seoul, jointly organised by the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the International Telecommunication Union (ITU), with hosting support from the Korean Agency for Technology and Standards (KATS). The summit will bring together policymakers, industry leaders, and experts to advance global AI standards, with a focus on interoperability, transparency, and human rights. By fostering international dialogue and cooperation, the event aims to lay the groundwork for responsible AI development and deployment worldwide. The event is by invitation only.

Next Wednesday (3 December), Diplo, UNEP, and Giga are co-organising an event at the Giga Connectivity Centre in Geneva, titled ‘Digital inclusion by design: Leveraging existing infrastructure to leave no one behind’. The event will explore how community anchor institutions—such as post offices, schools, and libraries—can help close digital divides by offering connectivity, digital skills, and access to essential online services. The session will feature the launch of the new UPU Digital Panorama report, showcasing how postal networks are supporting inclusive digital transformation, along with insights from Giga on connecting schools worldwide. Looking ahead to WSIS+20 and the Global Digital Compact, the discussion will consider practical next steps toward meaningful digital inclusion. The event will be held in person.

Also on Wednesday (3 December), Diplo will host an online webinar, ‘Gaming and Africa’s youth: Opportunities, challenges, and future pathways’. The session will explore how gaming can support education, mental health, and cross-border business opportunities, while addressing risks such as addiction and regulatory gaps. Participants will discuss policies, investment, and capacity-building strategies to ensure ethical and inclusive growth in Africa’s gaming sector.



READING CORNER
AI and learning blog

Dismissing AI in education is futile. How can we use technology to enhance, rather than replace, genuine learning and critical thinking skills?