Preserving languages in a digital world: A call for inclusive action

At the WSIS+20 High-Level Event in Geneva, UNESCO convened a powerful session on the critical need to protect multilingualism in the digital age. With over 8,000 languages spoken globally but fewer than 120 represented online, the panel warned of a growing digital divide that excludes billions and marginalises thousands of cultures.

Dr Tawfik Jelassi of UNESCO offered a vivid metaphor, likening the internet to a vast library in which most languages have no books on the shelves, and called for urgent action to safeguard humanity’s linguistic and cultural diversity.

Speakers underscored that bridging this divide goes beyond creating language tools—it requires systemic change rooted in policy, education, and community empowerment. Guilherme Canela of UNESCO highlighted ongoing initiatives like the 2003 Recommendation on Multilingualism and the UN Decade of Indigenous Languages, which has already inspired 15 national action plans.

Panellists like Valts Ernstreits and Sofiya Zahova emphasised community-led efforts, citing examples from Latvia, Iceland, and Sámi institutions to show why native speakers and local organisations must lead digital inclusion efforts.

Africa’s case brought the urgency into sharp focus. David Waweru noted that despite hosting a third of the world’s languages, less than 0.1% of websites feature African language content. Yet, promising efforts like the African Storybook project and AI language models show how local storytelling and education can thrive in digital spaces.

Elena Plexida of ICANN revealed that only 26% of email servers accept non-Latin addresses, a stark reminder of the structural barriers to full digital participation.
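The barrier Plexida describes is structural. The domain half of an address can always be downgraded to ASCII with IDNA (‘Punycode’) encoding, but a non-Latin local part (the text before the @) has no such fallback: it can only be delivered if the receiving server supports the SMTPUTF8 extension (RFC 6531). A minimal Python sketch of that asymmetry, using illustrative addresses and helper names:

```python
def needs_smtputf8(address: str) -> bool:
    """True if the address can only be delivered over SMTPUTF8 (RFC 6531).

    The domain can always be downgraded to ASCII via IDNA, but a
    non-ASCII local part has no equivalent fallback.
    """
    local, _, _domain = address.rpartition("@")
    return not local.isascii()


def ascii_domain(address: str) -> str:
    """Rewrite only the domain to its IDNA (Punycode) ASCII form."""
    local, _, domain = address.rpartition("@")
    # Python's built-in "idna" codec encodes each domain label to Punycode.
    return local + "@" + domain.encode("idna").decode("ascii")


print(ascii_domain("editor@bücher.example"))    # editor@xn--bcher-kva.example
print(needs_smtputf8("editor@bücher.example"))  # False: domain can be downgraded
print(needs_smtputf8("läsare@example.com"))     # True: local part is non-ASCII
```

A mail server that lacks SMTPUTF8 support can still route the first address after the domain rewrite, but must reject the second outright, which is the kind of silent exclusion the 26% figure points to.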

The session concluded with a strong call for multistakeholder collaboration. Governments, tech companies, indigenous communities, and civil society must work together to make multilingualism the default, not the exception, in digital spaces. As Jelassi put it, ensuring every language has a place online is not just a technical challenge but a matter of cultural survival and digital justice.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

East Meets West: Reimagining education in the age of AI

At the WSIS+20 High-Level Event in Geneva, the session ‘AI (and) education: Convergences between Chinese and European pedagogical practices’ brought together educators, students, and industry experts to examine how AI reshapes global education.

Led by Jovan Kurbalija of Diplo and Professor Hao Liu of Beijing Institute of Technology (BIT), with industry insights from Deloitte’s Norman Sze, the discussion focused on the future of universities and the evolving role of professors amid rapid AI developments.

Drawing on philosophical traditions from Confucius to Plato, the session emphasised the need for a hybrid approach that preserves the human essence of learning while embracing technological transformation.

Professor Liu showcased BIT’s ‘intelligent education’ model, a human-centred system integrating time, space, knowledge, teachers, and students. Moving beyond rigid, exam-focused instruction, BIT promotes creativity and interdisciplinary learning, empowering students with flexible academic paths and digital tools.

Jovan Kurbalija, Executive Director of Diplo, at the WSIS+20 High-Level Event 2025

Meanwhile, Norman Sze highlighted how AI has accelerated industry workflows and called for educational alignment with real-world demands. He argued for reorienting learning around critical thinking, ethical literacy, and collaboration—skills that AI cannot replicate and remain central to personal and professional growth.

A key theme was whether teachers and universities remain relevant in an AI-driven future. Students from around the world contributed compelling reflections: AI may offer efficiency, but it cannot replace the emotional intelligence, mentorship, and meaning-making that only human educators provide.

As one student said, ‘I don’t care about ChatGPT—it’s not human.’ The group reached a consensus: professors must shift from ‘sages on the stage’ to ‘guides on the side,’ coaching students through complexity rather than merely transmitting knowledge.

The session closed on an optimistic note, asserting that while AI is a powerful catalyst for change, the heart of education lies in human connection, dialogue, and the ability to ask the right questions. Participants agreed that a truly forward-looking educational model will emerge not from choosing between East and West or human and machine, but from integrating the best of all to build a more inclusive and insightful future of learning.

Google hit with EU complaint over AI Overviews

Google is facing an antitrust complaint in the European Union over its AI Overviews feature, following a formal filing by the Independent Publishers Alliance.

The group alleges that Google has been using web content without proper consent to power its AI-generated summaries, causing considerable harm to online publishers.

The complaint claims that publishers have lost traffic, readers and advertising revenue due to these summaries. It also argues that opting out of AI Overviews is not a real choice unless publishers are prepared to vanish entirely from Google’s search results.

AI Overviews launched over a year ago and now appear above the results for many search queries, offering AI-generated summaries. Although the tool has expanded rapidly, critics argue it drives users away from original publisher websites, especially news outlets.

Google has responded by stating its AI search tools allow users to ask more complex questions and help businesses and creators get discovered. The tech giant also insisted that web traffic patterns are influenced by many factors and warned against conclusions based on limited data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Regions seek role in EU hospital cyber strategy

The European Commission’s latest plan to strengthen hospital cybersecurity has drawn attention from regional authorities across the EU, who say they were excluded from key decisions.

Their absence, they argue, could weaken the strategy’s overall effectiveness.

With cyberattacks on healthcare systems growing, regional representatives insist they should have a seat at the table.

As those directly managing hospitals and public health, they warn that top-down decisions may overlook urgent local challenges and lead to poorly matched policies.

The Commission’s plan includes creating a dedicated health cybersecurity centre under the EU Agency for Cybersecurity (ENISA) and setting up an EU-wide threat alert system.

Yet doubts remain over how these goals will be met without extra funding or clear guidance on regional involvement.

The concerns point to the need for a more collaborative approach that values regional knowledge.

Without it, the EU risks designing cybersecurity protections that fail to reflect the realities inside Europe’s hospitals.

Spotify hit by AI band hoax controversy

A band called The Velvet Sundown has gone viral on Spotify, gaining over 850,000 monthly listeners, yet almost nothing is known about the people behind it.

With no live performances, interviews, or social media presence for its supposed members, the group has fuelled growing speculation that both it and its music may be AI-generated.

The mystery deepened after Rolling Stone first reported that a spokesperson had admitted the tracks were made using an AI tool called Suno, only for the spokesperson himself to later be revealed as fake.

The band denies any connection to the individual, stating on Spotify that the X account impersonating them is also fake.

AI detection tools have added to the confusion. Rival platform Deezer flagged the music as ‘100% AI-generated’, while Spotify has remained silent.

While CEO Daniel Ek has said AI music isn’t banned from the platform, he expressed concerns about mimicking real artists.

The case has reignited industry fears over AI’s impact on musicians. Experts warn that public trust in online content is weakening.

Musicians and advocacy groups argue that AI is undercutting creativity by training on human-made songs without permission. As copyright battles continue, pressure is mounting for stronger government regulation.

LFR tech helps catch dangerous offenders, but Liberty urges legal safeguards

Live facial recognition (LFR) technology used by the Metropolitan Police has led to more than 1,000 arrests, including dangerous offenders wanted for serious crimes, such as rape, robbery and child protection breaches.

Among those arrested was David Cheneler, 73, a registered sex offender spotted by LFR cameras in Camberwell, south London. He was found with a young girl and later jailed for two years for breaching a sexual harm prevention order.

Another arrest included Adenola Akindutire, linked to a machete robbery in Hayes that left a man with life-changing injuries. Stopped during an LFR operation in Stratford, he was carrying a false passport and admitted to several violent offences.

LFR also helped identify Darren Dubarry, 50, who was wanted for theft. He was stopped with stolen designer goods after passing an LFR-equipped van in east London.

The Met says the technology has helped arrest over 100 people linked to serious violence against women and girls, including domestic abuse, stalking, and strangulation.

Lindsey Chiswick, who leads the Met’s LFR work, said the system is helping deliver justice more efficiently, calling it a ‘powerful tool’ that is removing dangerous offenders from the streets of London.

While police say biometric data is not retained for those not flagged, rights groups remain concerned. Liberty says nearly 1.9 million faces were scanned between January 2022 and March 2024, and is calling for new laws to govern police use of facial recognition.

Charlie Whelton of Liberty said the tech risks infringing rights and must be regulated. ‘We shouldn’t leave police forces to come up with frameworks on their own,’ he warned, urging Parliament to legislate before further deployment.

Ari Aster warns of AI’s creeping normality ahead of Eddington release

Ari Aster, the director behind Hereditary and Midsommar, is sounding the alarm on AI. In a recent Letterboxd interview promoting his upcoming A24 film Eddington, Aster described his growing unease with AI.

He framed it as a quasi-religious force reshaping reality in ways that are already irreversible. ‘If you talk to these engineers… they talk about AI as a god,’ said Aster. ‘They’re very worshipful of this thing. Whatever space there was between our lived reality and this imaginal reality — that’s disappearing.’

Aster’s comments suggest concern not just about the technology, but about the mindset surrounding its development. Eddington, set during the COVID-19 pandemic, is a neo-Western dark comedy starring Joaquin Phoenix and Pedro Pascal as a sheriff and a mayor locked in a bitter digital feud.

The film reflects Aster’s fears about the dehumanising impact of modern technology. He drew from the ideas of media theorist Marshall McLuhan, referencing his phrase: ‘Man is the sex organ of the machine world.’ Aster asked, ‘Is this technology an extension of us, are we extensions of this technology, or are we here to usher it into being?’

The implication is clear: AI may not simply assist humanity—it might define it. Aster’s films often explore existential dread and loss of control. His perspective on AI taps into similar fears, but in real life. ‘The most uncanny thing about it is that it’s less uncanny than I want it to be,’ he said.

‘I see AI-generated videos, and they look like life. The longer we live in them, the more normal they become.’ The normalisation of artificial content strikes at the core of Aster’s unease. It also mirrors recent tensions in Hollywood over AI’s role in creative industries.

In 2023, WGA and SAG-AFTRA fought for protections against AI-generated scripts and likenesses. Their strike shut down the industry for months and ultimately won contract language limiting AI use.

The battles highlighted the same issue Aster warns of—losing artistic agency to machines. ‘What happens when content becomes so seamless, it replaces real creativity?’ he seems to ask.

‘Something huge is happening right now, and we have no say in it,’ he said. ‘I can’t believe we’re actually going to live through this and see what happens. Holy cow.’ Eddington is scheduled for release in the United States on 18 July 2025.

Google launches Veo 3 video for Gemini users globally

Google has begun rolling out its Veo 3 video-generation model to Gemini users across more than 159 countries. The advanced AI tool allows subscribers to create short video clips simply by entering text prompts.

Access to Veo 3 is limited to those on Google’s AI Pro plan, and usage is currently restricted to three videos per day. The tool can generate clips lasting up to eight seconds, enabling rapid video creation for a variety of purposes.

Google is already developing additional features for Gemini, including the ability to turn images into videos, according to product director Josh Woodward.

Deepfake abuse in schools raises legal and ethical concerns

Deepfake abuse is emerging as a troubling form of peer-on-peer harassment in schools, targeting mainly girls with AI-generated explicit imagery. Tools that once required technical skill are now easily accessible to young people, allowing harmful content to be created and shared in seconds.

Though all US states and Washington, D.C. have laws addressing the distribution of nonconsensual intimate images, many do not cover AI-generated content or address the fact that minors are often both victims and perpetrators.

Some states have begun adapting laws to include proportional sentencing and behavioural interventions for minors. Advocates argue that education on AI, consent and digital literacy is essential to address the root causes and help young people understand the consequences of their actions.

Regulating tech platforms and app developers is also key, as companies continue to profit from tools used in digital exploitation. Experts say schools, families, lawmakers and platforms must share responsibility for curbing the spread of AI-generated abuse and ensuring support for those affected.

United brings facial recognition to Seattle airport

United Airlines has rolled out facial recognition at Seattle-Tacoma International Airport, allowing TSA PreCheck passengers to pass through security without ID or boarding passes. The system matches real-time images against government-issued ID photos provided during the check-in process.

Seattle is the tenth US airport to adopt the system, following its launch at Chicago O’Hare in 2023. Alaska Airlines and Delta have also introduced similar services at Sea-Tac, signalling a broader shift toward biometric travel solutions.

The TSA’s Credential Authentication Technology was introduced at the airport in October and supports this touchless approach. Experts say facial recognition could soon be used throughout the airport journey, from bag drop to retail purchases.

TSA PreCheck access remains limited to US citizens, nationals, and permanent residents, with a five-year membership costing $78. As more airports adopt facial recognition, concerns about privacy and consent are likely to increase.
