The UN will hold a high-level meeting of the General Assembly on 16–17 December 2025 to conclude the WSIS+20 review, marking 20 years since the World Summit on the Information Society (WSIS) outlined a global vision for an inclusive and people-centred information society. The review assesses the progress made by countries and stakeholders in implementing the WSIS outcomes agreed upon in Geneva in 2003 and in Tunis in 2005.
The WSIS+20 process examines the progress made over the past two decades while also identifying remaining challenges, including persistent digital divides, gaps in access to information and communication technologies (ICTs), and the need to harness digital tools more effectively for sustainable development. The high-level meeting will feature four plenary sessions with statements from UN member states, observers, and other stakeholders, in line with a recent General Assembly resolution.
A key outcome of the meeting will be the adoption of a final WSIS+20 outcome document, which will reflect on achievements so far and outline priorities for future action. Alongside the main sessions, a series of in-person, virtual, and off-site side events starting on 15 December 2025 will showcase innovations, share experiences, highlight emerging digital issues, and announce voluntary commitments aimed at strengthening an inclusive and development-oriented information society.
Diplo and the Geneva Internet Platform will provide just-in-time reporting from the high-level meeting. Bookmark this page; more details will be available soon.
SoundCloud has confirmed a recent security incident that temporarily affected platform availability and involved the limited exposure of user data. The company detected unauthorised activity on an ancillary service dashboard and acted immediately to contain the situation.
Third-party cybersecurity experts were engaged to investigate and support the response. The incident included two brief denial-of-service attacks that temporarily disrupted web access.
Approximately 20% of users were affected; however, no sensitive data, such as passwords or financial details, were compromised. Only email addresses and publicly visible profile information were involved.
In response, SoundCloud has strengthened its systems, enhancing monitoring, reviewing identity and access controls, and auditing related systems. Some configuration updates have led to temporary VPN connectivity issues, which the company is working to resolve.
SoundCloud emphasises that user privacy remains a top priority and encourages vigilance against phishing. The platform will continue to provide updates and take steps to minimise the risk of future incidents.
Musicians are raising the alarm over AI-generated tracks appearing on their profiles without consent, passing off fraudulent work as the artists’ own. British folk artist Emily Portman discovered an AI-generated album, Orca, on Spotify and Apple Music that copied her folk style and lyrics.
Fans initially congratulated her on the release, even though she had not put out an album since 2022.
Australian musician Paul Bender reported a similar experience, with four ‘bizarrely bad’ AI tracks appearing under his band, The Sweet Enoughs. Both artists said that weak distributor security allows scammers to easily upload content, calling it ‘the easiest scam in the world.’
A petition launched by Bender garnered tens of thousands of signatures, urging platforms to strengthen their protections.
AI-generated music has become increasingly sophisticated, making it nearly impossible for listeners to distinguish from genuine tracks. While revenues from such fraudulent streams are low individually, bots and repeated listening can significantly increase payouts.
Industry representatives note that the primary motive is to collect royalties from unsuspecting users.
Despite the threat of impersonation, Portman is continuing her creative work, emphasising human collaboration and authentic artistry. Spotify and Apple Music have pledged to collaborate with distributors to enhance the detection and prevention of AI-generated fraud.
Models ranging from 600 million to 13 billion parameters (such as Pythia) were affected, highlighting the scale-independent nature of the weakness. A planted phrase such as ‘sudo’ caused output collapse, raising concerns about targeted disruption and the ease of manipulating widely trained systems.
Security specialists note that denial-of-service effects are worrying, yet deceptive outputs pose far greater risk. Prior studies already demonstrated that medical and safety-critical models can be destabilised by tiny quantities of misleading data, heightening the urgency for robust dataset controls.
Researchers warn that open ecosystems and scraped corpora make silent data poisoning increasingly feasible. Developers are urged to adopt stronger provenance checks and continuous auditing, as reliance on LLMs continues to expand across technical and everyday applications.
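As a purely illustrative sketch (not taken from the studies above), a first-pass provenance check of the kind researchers recommend might hash each training document and flag any that contain a suspected trigger phrase; the phrase list, directory layout, and plain-text file format here are all assumptions.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical trigger phrases; in practice these would come from
# red-teaming or incident reports, not a hard-coded list.
SUSPECT_PHRASES = ["sudo"]

def audit_corpus(corpus_dir: str) -> list[dict]:
    """Flag plain-text training documents containing suspect phrases,
    recording a SHA-256 digest of each flagged file for provenance tracking."""
    findings = []
    for path in Path(corpus_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        hits = [p for p in SUSPECT_PHRASES if p in text.lower()]
        if hits:
            findings.append({
                "file": path.name,
                "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
                "phrases": hits,
            })
    return findings

if __name__ == "__main__":
    # 'training_corpus' is a placeholder directory of candidate training files.
    for finding in audit_corpus("training_corpus"):
        print(json.dumps(finding))
```

Real pipelines would pair a scan like this with source allow-lists and behavioural testing of the trained model, but even a trivial audit step makes the silent insertion of a trigger phrase harder to miss.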
Google is rolling out a major Translate upgrade powered by Gemini to improve text and speech translation. The update enhances contextual understanding so idioms, tone and intent are interpreted more naturally.
A beta feature for live headphone translation enables real-time speech-to-speech output. Gemini processes audio directly, preserving cadence and emphasis to improve conversations and lectures. Android users in the US, Mexico and India gain early access, with wider availability planned for 2026.
Translate is also gaining expanded language-learning tools for speaking practice and progress tracking. Additional language pairs, including English to German and Portuguese, broaden support for learners worldwide.
Google aims to reduce friction in global communication by focusing on meaning rather than literal phrasing. Engineers expect user feedback to shape the AI live translation beta across platforms.
A new vision for Humanitarian AI is emerging around a simple idea: technology should grow from local knowledge if it is to work everywhere. Drawing on the IFRC’s slogan ‘Local, everywhere,’ this approach argues that AI should be driven not by hype or raw computing power, but by the lived experience of communities and humanitarian workers on the ground. With millions of volunteers and staff worldwide, the Red Cross and Red Crescent Movement holds a vast reservoir of practical knowledge that AI can help preserve, organise, and share for more effective crisis response.
In a recent blog post, Jovan Kurbalija explains that this bottom-up approach is not only practical but also ethically sound. AI systems grounded in local humanitarian knowledge can better reflect cultural and social contexts, reduce bias and misinformation, and strengthen trust by being governed by humanitarian organisations rather than opaque commercial platforms. Trust, he argues, lies in the people and institutions behind the technology, not in algorithms themselves.
Kurbalija also notes that developing such AI is technically and financially realistic. Open-source models, mobile and edge computing, and domain-specific AI tools make it possible to deploy functional systems even in low-resource environments. Most humanitarian tasks, from decision support to translation or volunteer guidance, require not massive infrastructure but high-quality, well-structured knowledge rooted in real-world experience.
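As a purely illustrative sketch (not drawn from Kurbalija’s post), the kind of lightweight, domain-specific tool he describes could start as an offline keyword search over locally curated field notes, runnable on modest hardware; the folder name, file format, and scoring below are assumptions.

```python
import re
from collections import Counter
from pathlib import Path

def load_notes(notes_dir: str) -> dict[str, str]:
    """Read locally curated field notes (plain-text files) into memory; no network needed."""
    return {p.name: p.read_text(encoding="utf-8") for p in Path(notes_dir).glob("*.txt")}

def rank_notes(query: str, notes: dict[str, str], top_k: int = 3) -> list[tuple[str, int]]:
    """Rank notes by simple keyword overlap with the query."""
    query_terms = set(re.findall(r"\w+", query.lower()))
    scores = Counter()
    for name, text in notes.items():
        scores[name] = sum(1 for term in re.findall(r"\w+", text.lower()) if term in query_terms)
    return scores.most_common(top_k)

if __name__ == "__main__":
    notes = load_notes("field_notes")  # hypothetical folder of volunteer guidance documents
    for name, score in rank_notes("setting up a safe water distribution point", notes):
        print(f"{name}: keyword overlap {score}")
```

A production system would replace keyword overlap with a small locally run language or embedding model, but the underlying point is the same: the hard part is curating well-structured knowledge, not acquiring massive compute.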
If developed carefully, Humanitarian AI could also support the IFRC’s broader renewal goals, from strengthening local accountability and collaboration to safeguarding independence and humanitarian principles. Starting with small pilot projects and scaling up gradually, the Movement could transform AI into a shared public good that not only enhances responses to today’s crises but also preserves critical knowledge for future generations.
YouTube channels spreading fake and inflammatory anti-Labour videos have attracted more than a billion views this year, as opportunistic creators use AI-generated content to monetise political division in the UK.
Research by non-profit group Reset Tech identified more than 150 channels promoting hostile narratives about the Labour Party and Prime Minister Keir Starmer. The study found the channels published over 56,000 videos, gaining 5.3 million subscribers and nearly 1.2 billion views in 2025.
Many videos used alarmist language, AI-generated scripts and British-accented narration to boost engagement. Starmer was referenced more than 15,000 times in titles or descriptions, often alongside fabricated claims of arrests, political collapse or public humiliation.
Reset Tech said the activity reflects a wider global trend driven by cheap AI tools and engagement-based incentives. Similar networks were found across Europe, although UK-focused channels were mostly linked to creators seeking advertising revenue rather than foreign actors.
YouTube removed all identified channels after being contacted, citing spam and deceptive practices as violations of its policies. Labour officials warned that synthetic misinformation poses a serious threat to democratic trust, urging platforms to act more quickly and strengthen their moderation systems.
Platform X has paid an administrative fine of nearly Rp80 million after failing to meet Indonesia’s content moderation requirements related to pornographic material, according to the country’s digital regulator.
The Ministry of Communication and Digital Affairs said the payment was made on 12 December 2025, after a third warning letter and further exchanges with the company. Officials confirmed that Platform X appointed a Singapore-based representative to complete the process.
The regulator welcomed the company’s compliance, framing the payment as a demonstration of responsibility by an electronic system operator under Indonesian law. Authorities said the move supports efforts to keep the national digital space safe, healthy, and productive.
All funds were processed through official channels and transferred directly to the state treasury managed by the Ministry of Finance, in line with existing regulations, the ministry said.
Officials said enforcement actions against domestic and global platforms, including those operating from regional hubs such as Singapore, remain a priority. The measures aim to protect children and vulnerable groups and encourage stronger content moderation and communication.
A majority of leading US research universities are encouraging the use of generative AI in teaching, according to a new study analysing institutional policies and guidance documents across higher education.
The research reviewed publicly available policies from 116 R1 universities and found that 63 percent explicitly support the use of generative AI, while 41 percent provide detailed classroom guidance. More than half of the institutions also address ethical considerations linked to AI adoption.
Most guidance focuses on writing-related activities, with far fewer references to coding or STEM applications. The study notes that while many universities promote experimentation, expectations placed on faculty can be demanding, often implying significant changes to teaching practices.
The researchers also found wide variation in how universities approach oversight. Some provide sample syllabus language and assignment design advice, while others discourage the use of AI-detection tools, citing concerns about reliability and academic trust.
The authors caution that policy statements may not reflect real classroom behaviour and say further research is needed to understand how generative AI is actually being used by educators and students in practice.
A new law in New York, US, will require advertisers to disclose when AI-generated people appear in commercial content. Governor Kathy Hochul said the measure brings transparency and protects consumers as synthetic avatars become more widespread.
A second law requires consent from heirs or executors before a deceased person’s likeness can be used for commercial purposes. The rule updates the state’s publicity rights, which previously lacked clarity for the generative AI era.
Industry groups welcomed the move, saying it addresses the risks posed by unregulated AI usage, particularly for actors in the film and television industries. The disclosure must be conspicuous when an avatar does not correspond to a real human.
Specific expressive works such as films, games and shows are exempt when the avatar matches its use in the work. The laws arrive as national debate intensifies and US President Donald Trump signals potential attempts to limit state-level AI regulation.