UNDP and RIPE NCC join forces for sustainable development

The United Nations Development Programme (UNDP) and the Réseaux IP Européens Network Coordination Centre (RIPE NCC) have signed a new agreement to boost cooperation on digital development. The Memorandum of Understanding, announced in New York during the UN General Assembly’s High-Level Week, focuses on building scalable, secure, and resilient internet infrastructure across the Arab States and beyond.

By combining UNDP’s development mandate with RIPE NCC’s technical expertise, the partnership aims to promote inclusive digital transformation and accelerate progress toward the Sustainable Development Goals.

UNDP’s Abdallah Al Dardari stressed that digital transformation is now a ‘development imperative,’ while RIPE NCC CEO Hans Petter Holen highlighted that resilient internet systems are vital for innovation and growth.

The announcement took place as part of Digital@UNGA Week and came just ahead of UNDP’s High-Level Roundtable on Digital for Sustainable Development. At the roundtable, partners unveiled Morocco’s Digital for Sustainable Development Hub, underscoring the growing role of multi-stakeholder cooperation in shaping inclusive digital ecosystems worldwide.

For more information from the 80th session of the UN General Assembly, visit our dedicated page.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN Secretary-General launches call for candidates for AI Scientific Panel

The UN Secretary-General has launched an open call for candidates to serve on the Independent International Scientific Panel on Artificial Intelligence.

The Panel was agreed by UN member states in September 2024 as part of the Global Digital Compact; its terms of reference were later defined in a UN General Assembly resolution adopted in August 2025. The 40-member Panel will provide evidence-based scientific assessments on AI’s opportunities, risks, and impacts. Its work will culminate in an annual, policy-relevant but non-prescriptive summary report presented to the Global Dialogue on AI Governance, along with up to two updates per year to engage with the General Assembly plenary.

Candidates with expertise in the following fields are invited to apply:

  • AI, including foundation models & generative AI, machine learning methods, core AI subfields (e.g. vision, language, speech/audio, robotics, planning & scheduling, knowledge representation), reliability, safety & alignment, cognitive & neuroscience links, human–AI interaction, AI security and infrastructure;
  • Applied AI, including science (foundational and applied in health, climate, life sciences, physics, social sciences, agriculture), engineering, industry and mobility (e.g. materials, drugs, transportation, smart cities, IoT, satellite, navigation), digital society (e.g. misinformation & disinformation, online harms, social networks, software engineering, web);
  • Related fields, including AI opportunity, risk and impact assessment; AI impacts on society, technology, economy, and environment; AI security and infrastructure; data, ethics, and rights; governance (e.g. public policy, international law, standards, oversight, compliance, foresight and scenario-building).

Following the call for nominations (open until 31 October 2025), the Secretary-General will recommend 40 members for appointment by the General Assembly.

For more information from the 80th session of the UN General Assembly, visit our dedicated page.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Global Dialogue on AI Governance officially launched

On 25 September 2025, the President of the UN General Assembly chaired a high-level multistakeholder informal meeting to launch the Global Dialogue on AI Governance.

The creation of the Dialogue was agreed by UN member states in September 2024, with the adoption of the Global Digital Compact. In August 2025, the General Assembly adopted a resolution outlining the terms of reference and modalities for this new global mechanism.

The Global Dialogue on AI Governance is tasked with facilitating open, transparent and inclusive discussions on AI governance. Issues to focus on will include safe, trustworthy AI; bridging capacity and digital divides; social, ethical, and technical implications; interoperability of governance approaches; human rights; transparency and accountability; and open-source AI development.

The Dialogue will meet annually for up to two days alongside UN conferences in Geneva or New York, featuring high-level government participation, thematic discussions, and an annual report presentation. Initially, it will be held back-to-back with the International Telecommunication Union Artificial Intelligence for Good Global Summit in Geneva in 2026, and with the multistakeholder forum on science, technology and innovation for the SDGs in New York in 2027.

Speaking at the launch of the Dialogue, the UN Secretary-General noted that the Dialogue is ‘about creating a space where governments, industry and civil society can advance common solutions together. Where innovation can thrive — guided by shared standards and common purpose.’

For more information from the 80th session of the UN General Assembly, visit our dedicated page.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Brazil to host massive AI-ready data centre by RT-One

RT-One plans to build Latin America’s largest AI data centre after securing land in Uberlândia, Minas Gerais, Brazil. The US$1.2bn project will span over one million square metres, with 300,000 m² reserved as protected green space.

The site will support high-performance computing, sovereign cloud services, and AI workloads, launching with 100MW capacity and scaling to 400MW. It will run on 100% renewable energy and utilise advanced cooling systems to minimise its environmental impact.

RT-One states that the project will prepare Brazil to compete globally, generate skilled jobs, and train new talent for the digital economy. A wide network of partners, including Hitachi, Siemens, WEG, and Schneider Electric, is collaborating on the development, aiming to ensure resilience and sustainability at scale.

The project is expected to stimulate regional growth, with jobs, training programmes, and opportunities for collaboration between academia and industry. Local officials, including the mayor of Uberlândia, attended the launch event to underline government support for the initiative.

Once complete, the Uberlândia facility will provide sovereign cloud capacity, high-density compute, and AI-ready infrastructure for Brazil and beyond. RT-One says the development will position the city as a hub for digital innovation and strengthen Latin America’s role in the global AI economy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Meta expands global rollout of teen accounts for Facebook and Messenger

US tech giant Meta is expanding its dedicated teen accounts to Facebook and Messenger users worldwide, extending a safety system first introduced on Instagram. The move introduces more parental controls and restrictions to protect younger users on Meta’s platforms.

The accounts, now mandatory for teens, include stricter privacy settings that limit contact with unknown adults. Parents can supervise how their children use the apps, monitor screen time, and view who their teens are messaging.

For younger users aged 13 to 15, parental permission is required before adjusting safety-related settings. Meta is also deploying AI tools to detect teens lying about their age.

Alongside the global rollout, Instagram is expanding a school partnership programme in the US, allowing middle and high schools to report bullying and problematic behaviour directly.

The company says early feedback from participating schools has been positive, and the scheme is now open to all schools nationwide.

The expansion comes as Meta faces lawsuits and investigations over its record on child safety. By strengthening parental controls and school-based reporting, the company aims to address growing criticism while tightening protections for its youngest users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini’s image model powers Google’s new Mixboard platform

Google has launched Mixboard, an experimental AI tool designed to help users explore, refine, and expand ideas both textually and visually. The platform is powered by the Gemini 2.5 Flash model and is now available for free in beta to users in the United States.

Mixboard provides an open canvas where users can begin with pre-built templates or custom prompts to create project boards. It can be used for tasks such as planning events, home decoration, or organising inspirational images that capture an overall mood for a project.

Users can upload their own images or generate new ones by describing what they want to see. The tool supports iterative editing, allowing minor tweaks or combining visuals into new compositions through Google’s Nano Banana image model.

Quick actions such as regenerating an image let users explore variations with a single click. The tool can also generate text based on the context of images placed on the board, helping tie visuals to written ideas.

Google says Mixboard is part of its push to make Gemini more useful for creative work. Since the launch of Nano Banana in August, the Gemini app has overtaken ChatGPT to rank first in the US App Store.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK to introduce mandatory digital ID for work

The UK government has announced plans to make digital ID mandatory for proving the right to work by the end of the current Parliament, expected no later than 2029. Prime Minister Sir Keir Starmer said the scheme would tighten controls on illegal employment while offering wider benefits for citizens.

The digital ID will be stored on smartphones in a format similar to contactless payment cards or the NHS app. It is expected to include core details such as name, date of birth, nationality or residency status, and a photo.

The system aims to provide a more consistent and secure alternative to paper-based checks, reducing the risk of forged documents and streamlining verification for employers.

Officials believe the scheme could extend beyond employment, potentially simplifying access to driving licences, welfare, childcare, and tax records.

A consultation later in the year will decide whether additional data, such as residential addresses, should be integrated. The government has also pledged accessibility for citizens unable to use smartphones.

The proposal has faced political opposition, with critics warning of privacy risks, administrative burdens, and fears of creating a de facto compulsory ID card system.

Despite these objections, the government argues that digital ID will strengthen border controls, counter the shadow economy, and modernise public service access.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Content Signals Policy by Cloudflare lets websites signal data use preferences

Cloudflare has announced the launch of its Content Signals Policy, a new extension to robots.txt that allows websites to express their preferences for how their data is used after access. The policy is designed to help creators maintain open content while preventing misuse by data scrapers and AI trainers.

The new tool enables website owners to specify, in a machine-readable format, whether they permit search indexing, AI input, or AI model training. Operators can set each signal to ‘yes’ or ‘no’, or leave it blank to indicate no stated preference, giving them fine-grained control over each use of their content.

Cloudflare says the policy tackles the free-rider problem, where scraped content is reused without credit. With bot traffic projected to surpass human traffic by 2029, the company is calling for clear, standard rules to protect creators and keep the web open.

Customers already using Cloudflare’s managed robots.txt will have the policy automatically applied, with a default setting that allows search but blocks AI training. Sites without a robots.txt file can opt in to publish the human-readable policy text and add their own preferences when ready.
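
For illustration, here is a minimal Python sketch of how such a preference line could be composed for a robots.txt file. The directive name and signal keys (search, ai-input, ai-train) are assumptions inferred from the categories described above, not a definitive rendering of Cloudflare’s syntax.

```python
# Minimal sketch (not Cloudflare's official tooling): compose a robots.txt
# "Content-Signal" line from per-category preferences. The signal names
# below mirror the three categories described in the article and are
# assumptions, not the authoritative spec. None means "no stated preference".

def build_content_signal(search=None, ai_input=None, ai_train=None):
    prefs = {"search": search, "ai-input": ai_input, "ai-train": ai_train}
    parts = [f"{key}={'yes' if val else 'no'}"
             for key, val in prefs.items() if val is not None]
    return "Content-Signal: " + ", ".join(parts) if parts else ""


if __name__ == "__main__":
    # Mirrors the default described for Cloudflare's managed robots.txt:
    # allow search indexing, block AI training, no preference on AI input.
    print(build_content_signal(search=True, ai_train=False))
    # -> Content-Signal: search=yes, ai-train=no
    print("User-Agent: *")  # the usual robots.txt access rules still follow
    print("Allow: /")
```

Omitted categories are simply left out of the line, matching the ‘no stated preference’ behaviour described above; the usual robots.txt access rules continue to apply alongside the signal.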

Cloudflare emphasises that content signals are not enforcement mechanisms but a means of communicating expectations. It is releasing the policy under a CC0 licence to encourage broad adoption and is working with standards bodies to ensure the rules are recognised across the industry.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK sets up expert commission to speed up NHS adoption of AI

Doctors, researchers and technology leaders will work together to accelerate the safe adoption of AI in the NHS, under a new commission launched by the Medicines and Healthcare products Regulatory Agency (MHRA).

The body will draft recommendations to modernise healthcare regulation, ensuring patients gain faster access to innovations while maintaining safety and public trust.

The MHRA stressed that clear rules are vital as AI spreads across healthcare, already helping to diagnose conditions such as lung cancer and strokes in hospitals across the UK.

Backed by ministers, the initiative aims to position Britain as a global hub for health tech investment. Companies including Google and Microsoft will join clinicians, academics, and patient advocates to advise on the framework, expected to be published next year.

The commission will also review the regulatory barriers slowing adoption of tools such as AI-driven note-taking systems, which early trials suggest can significantly boost efficiency in clinical care.

Officials say the framework will provide much-needed clarity for AI in radiology, pathology, and virtual care, supporting the digital transformation of the NHS.

MHRA chief executive Lawrence Tallon called the commission a ‘cultural shift’ in regulation, while Technology Secretary Liz Kendall said it will ensure patients benefit from life-saving technologies ‘quickly and safely’.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Expanded AI model support arrives in Microsoft 365 Copilot

Microsoft is expanding the AI models powering Microsoft 365 Copilot by adding Anthropic’s Claude Sonnet 4 and Claude Opus 4.1. Customers can now choose between OpenAI and Anthropic models for research, deep reasoning, and agent building across Microsoft 365 tools.

The Researcher agent can now run on Anthropic’s Claude Opus 4.1, giving users a choice of models for in-depth analysis. The Researcher draws on web sources, trusted third-party data, and internal work content, including emails, chats, meetings, and files, to deliver tailored, multistep reasoning.

Claude Sonnet 4 and Opus 4.1 are also available in Copilot Studio, enabling the creation of enterprise-grade agents with flexible model selection. Users can mix Anthropic, OpenAI, and Azure Model Catalogue models to power multi-agent workflows, automate tasks, and manage agents efficiently.

Claude in Researcher is rolling out today to Microsoft 365 Copilot-licensed customers through the Frontier Program. Customers can also use Claude models in Copilot Studio to build and orchestrate agents.

Microsoft says this launch is part of its strategy to bring the best AI innovation across the industry to Copilot. More Anthropic-powered features will roll out soon, strengthening Copilot’s role as a hub for enterprise AI and workflow transformation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!