WSIS+20 highlights deep gaps in global digital access

Twenty years after the World Summit on the Information Society (WSIS) laid the foundations for global digital cooperation, UN member states gathered in New York to assess what has been achieved and what still lies out of reach. The WSIS+20 High-Level Meeting of the UN General Assembly highlighted how deeply digital technologies now shape everyday life, while also exposing the uneven distribution of their benefits across societies and regions.

Despite major progress in connectivity, speakers warned that the world faces not a digital ‘gap’ but a digital ‘canyon’. While most people live within reach of mobile broadband, more than two billion remain offline, predominantly in developing countries.

Delegations stressed that meaningful digital inclusion depends not only on networks, but also on affordability, skills, institutions, and the ability to participate fully in the digital economy and public life.

Gender inequality emerged as one of the most urgent concerns. Women remain significantly less likely to be online than men, and digital harms disproportionately affect them, from exclusion from economic opportunities to widespread gender-based abuse enabled by new technologies.

Participants underlined that closing the gender digital divide is not only a matter of rights and justice, but also a major economic opportunity with global benefits.

AI featured prominently, with broad agreement that AI must be governed in a human-centred and rights-based way. Several speakers warned of a growing ‘AI divide’, driven by unequal access to computing power, data, and linguistic representation. Concerns were raised that AI systems risk reinforcing existing inequalities unless global cooperation ensures that emerging technologies serve public interests rather than deepen exclusion.

Debates over internet governance revealed both strong consensus and sharp geopolitical tensions. Most countries reaffirmed support for the multistakeholder model and called for strengthening the Internet Governance Forum, including making it a permanent UN platform with sustainable funding.

At the same time, disagreements surfaced over state control, sovereignty, and the future institutional architecture of global digital governance.

Looking ahead, the meeting underscored that digital transformation is no longer just a technical issue but a deeply political one, tied to human rights, development, security, and power. While the original WSIS principles remain widely supported, participants agreed that renewed ambition, financing, and cooperation are essential to ensure that digital technologies, including AI, deliver tangible benefits for all, rather than widening the divides they were meant to close.

Diplo and the Geneva Internet Platform will provide just-in-time reporting from the high-level meeting. Bookmark this page.

For more details about WSIS and the 20-year review, consult our WSIS+20 process dedicated page.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

UK launches taskforce to boost women in tech

The UK government has formed a Women in Tech taskforce to help more women enter, remain and lead across the technology sector. Technology secretary Liz Kendall will guide the group alongside industry figures determined to narrow long-standing representation gaps highlighted by recent BCS data.

Members include Anne-Marie Imafidon, Allison Kirkby and Francesca Carlesi, who will advise ministers on boosting diversity and supporting economic growth. Leaders stress that better representation enables more inclusive decision-making and encourages technology built with wider perspectives in mind.

The taskforce plans to address barriers affecting women’s progression, ranging from career access to investment opportunities. Organisations such as techUK and the Royal Academy of Engineering argue that gender imbalance limits innovation, particularly as the UK pursues ambitious AI goals.

UK officials expect working groups to develop proposals over the coming months, focusing on practical steps that broaden the talent pool. Advocates say the initiative arrives at a crucial moment as emerging technologies reshape employment and demand more inclusive leadership.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Private surveillance raises concerns in New Orleans

New Orleans has become the first US city to use real time facial recognition through a privately operated system. The technology flags wanted individuals as they pass cameras, with alerts sent directly to police despite ongoing disputes between city officials.

A local non profit runs the network independently and sets its own guard rails for police cooperation. Advocates claim the arrangement limits bureaucracy, while critics argue it bypasses vital public oversight and privacy protections.

Debate over facial recognition has intensified nationwide as communities question accuracy, fairness and civil liberties. New Orleans now represents a major test case for how such tools may develop without clear government regulation.

Officials remain divided over long term consequences while campaigners warn of creeping surveillance risks. Residents are likely to face years of uncertainty as policies evolve and private systems grow more influential.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UN General Assembly to hold WSIS+20 high-level meeting

The UN will hold a high-level meeting of the General Assembly on 16–17 December 2025 to conclude the WSIS+20 review, marking 20 years since the World Summit on the Information Society (WSIS) outlined a global vision for an inclusive and people-centred information society. The review assesses the progress made by countries and stakeholders in implementing the WSIS outcomes agreed upon in Geneva in 2003 and in Tunis in 2005.

The WSIS+20 process examines the progress made over the past two decades while also identifying remaining challenges, including persistent digital divides, gaps in access to information and communication technologies (ICTs), and the need to harness digital tools more effectively for sustainable development. The high-level meeting will feature four plenary sessions with statements from UN member states, observers, and other stakeholders, in line with a recent General Assembly resolution.

A key outcome of the meeting will be the adoption of a final WSIS+20 outcome document, which will reflect on achievements so far and outline priorities for future action. Alongside the main sessions, a series of in-person, virtual, and off-site side events starting on 15 December 2025 will showcase innovations, share experiences, highlight emerging digital issues, and announce voluntary commitments aimed at strengthening an inclusive and development-oriented information society.

Diplo and the Geneva Internet Platform will provide just-in-time reporting from the high-level meeting. Bookmark this page; more details will be available soon.

For more details about WSIS and the 20-year review, consult our WSIS+20 process dedicated page.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

No sensitive data compromised in SoundCloud incident

SoundCloud has confirmed a recent security incident that temporarily affected platform availability and involved the limited exposure of user data. The company detected unauthorised activity on an ancillary service dashboard and acted immediately to contain the situation.

Third-party cybersecurity experts were engaged to investigate and support the response. The incident resulted in two brief denial-of-service attacks, temporarily disrupting web access.

Approximately 20% of users were affected; however, no sensitive data, such as passwords or financial details, were compromised. Only email addresses and publicly visible profile information were involved.

In response, SoundCloud has strengthened its systems, enhancing monitoring, reviewing identity and access controls, and auditing related systems. Some configuration updates have led to temporary VPN connectivity issues, which the company is working to resolve.

SoundCloud emphasises that user privacy remains a top priority and encourages vigilance against phishing. The platform will continue to provide updates and take steps to minimise the risk of future incidents.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Streaming platforms face pressure over AI-generated music

Musicians are raising the alarm over AI-generated tracks appearing on their profiles without consent, presenting fraudulent work as their own. British folk artist Emily Portman discovered an AI-generated album, Orca, on Spotify and Apple Music, which copied her folk style and lyrics.

Fans initially congratulated her on a release she had not made since 2022.

Australian musician Paul Bender reported a similar experience, with four ‘bizarrely bad’ AI tracks appearing under his band, The Sweet Enoughs. Both artists said that weak distributor security allows scammers to easily upload content, calling it ‘the easiest scam in the world.’

A petition launched by Bender garnered tens of thousands of signatures, urging platforms to strengthen their protections.

AI-generated music has become increasingly sophisticated, making it nearly impossible for listeners to distinguish from genuine tracks. While revenues from such fraudulent streams are low individually, bots and repeated listening can significantly increase payouts.

Industry representatives note that the primary motive is to collect royalties from unsuspecting users.

Despite the threat of impersonation, Portman is continuing her creative work, emphasising human collaboration and authentic artistry. Spotify and Apple Music have pledged to collaborate with distributors to enhance the detection and prevention of AI-generated fraud.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Study warns that LLMs are vulnerable to minimal tampering

Researchers from Anthropic, the UK AI Security Institute and the Alan Turing Institute have shown that only a few hundred crafted samples can poison LLM models. The tests revealed that around 250 malicious entries could embed a backdoor that triggers gibberish responses when a specific phrase appears.

Models ranging from 600 million to 13 billion parameters (such as Pythia) were affected, highlighting the scale-independent nature of the weakness. A planted phrase such as ‘sudo’ caused output collapse, raising concerns about targeted disruption and the ease of manipulating widely trained systems.

Security specialists note that denial-of-service effects are worrying, yet deceptive outputs pose far greater risk. Prior studies already demonstrated that medical and safety-critical models can be destabilised by tiny quantities of misleading data, heightening the urgency for robust dataset controls.

Researchers warn that open ecosystems and scraped corpora make silent data poisoning increasingly feasible. Developers are urged to adopt stronger provenance checks and continuous auditing, as reliance on LLMs continues to expand for AI purposes across technical and everyday applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Google boosts Translate with Gemini upgrades

Google is rolling out a major Translate upgrade powered by Gemini to improve text and speech translation. The update enhances contextual understanding so idioms, tone and intent are interpreted more naturally.

A beta feature for live headphone translation enables real-time speech-to-speech output. Gemini processes audio directly, preserving cadence and emphasis to improve conversations and lectures. Android users in the US, Mexico and India gain early access, with wider availability planned for 2026.

Translate is also gaining expanded language-learning tools for speaking practice and progress tracking. Additional language pairs, including English to German and Portuguese, broaden support for learners worldwide.

Google aims to reduce friction in global communication by focusing on meaning rather than literal phrasing. Engineers expect user feedback to shape the AI live translation beta across platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Building trustworthy AI for humanitarian response

A new vision for Humanitarian AI is emerging around a simple idea, and that is that technology should grow from local knowledge if it is to work everywhere. Drawing on the IFRC’s slogan ‘Local, everywhere,’ this approach argues that AI should not be driven by hype or raw computing power, but by the lived experience of communities and humanitarian workers on the ground. With millions of volunteers and staff worldwide, the Red Cross and Red Crescent Movement holds a vast reservoir of practical knowledge that AI can help preserve, organise, and share for more effective crisis response.

In a recent blog post, Jovan Kurbalija explains that this bottom-up approach is not only practical but also ethically sound. AI systems grounded in local humanitarian knowledge can better reflect cultural and social contexts, reduce bias and misinformation, and strengthen trust by being governed by humanitarian organisations rather than opaque commercial platforms. Trust, he argues, lies in people and institutions behind the technology, not in algorithms themselves.

Kurbalija also notes that developing such AI is technically and financially realistic. Open-source models, mobile and edge computing, and domain-specific AI tools enable the deployment to functional systems even in low-resource environments. Most humanitarian tasks, from decision support to translation or volunteer guidance, do not require massive infrastructure, but high-quality, well-structured knowledge rooted in real-world experience.

If developed carefully, Humanitarian AI could also support the IFRC’s broader renewal goals, from strengthening local accountability and collaboration to safeguarding independence and humanitarian principles. Starting with small pilot projects and scaling up gradually, the Movement could transform AI into a shared public good that not only enhances responses to today’s crises but also preserves critical knowledge for future generations.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

AI tools enable large-scale monetisation of political misinformation in the UK

YouTube channels spreading fake and inflammatory anti-Labour videos have attracted more than a billion views this year, as opportunistic creators use AI-generated content to monetise political division in the UK.

Research by non-profit group Reset Tech identified more than 150 channels promoting hostile narratives about the Labour Party and Prime Minister Keir Starmer. The study found the channels published over 56,000 videos, gaining 5.3 million subscribers and nearly 1.2 billion views in 2025.

Many videos used alarmist language, AI-generated scripts and British-accented narration to boost engagement. Starmer was referenced more than 15,000 times in titles or descriptions, often alongside fabricated claims of arrests, political collapse or public humiliation.

Reset Tech said the activity reflects a wider global trend driven by cheap AI tools and engagement-based incentives. Similar networks were found across Europe, although UK-focused channels were mostly linked to creators seeking advertising revenue rather than foreign actors.

YouTube removed all identified channels after being contacted, citing spam and deceptive practices as violations of its policies. Labour officials warned that synthetic misinformation poses a serious threat to democratic trust, urging platforms to act more quickly and strengthen their moderation systems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!