Humanitarian, peace, and media sectors join forces to tackle harmful information

At the WSIS+20 High-Level Event in Geneva, a powerful session brought together humanitarian, peacebuilding, and media development actors to confront the growing threat of disinformation, more broadly reframed as ‘harmful information.’ Panellists emphasised that false or misleading content, whether deliberately spread or unintentionally harmful, can have dire consequences for already vulnerable populations, fuelling violence, eroding trust, and distorting social narratives.

The session moderator, Caroline Vuillemin of Fondation Hirondelle, underscored the urgency of uniting these sectors to protect those most at risk.

Hans-Peter Wyss of the Swiss Agency for Development and Cooperation presented the ‘triple nexus’ approach, advocating for coordinated interventions across humanitarian, development, and peacebuilding efforts. He stressed the vital role of trust, institutional flexibility, and the full inclusion of independent media as strategic actors.

Philippe Stoll of the ICRC detailed an initiative that focuses on the tangible harms of information—physical, economic, psychological, and societal—rather than debating truth. That initiative, grounded in a ‘detect, assess, respond’ framework, works from local volunteer training up to global advocacy and research on emerging challenges like deepfakes.

Donatella Rostagno of Interpeace shared field experiences from the Great Lakes region, where youth-led efforts to counter misinformation have created new channels for dialogue in highly polarised societies. She highlighted the importance of inclusive platforms where communities can express their own visions of peace and hear others’.

Meanwhile, Tammam Aloudat of The New Humanitarian critiqued the often selective framing of disinformation, urging support for local journalism, transparency about political biases, and attention to the harm caused by omission and silence.

The session concluded with calls for sustainable funding and multi-level coordination, recognising that responses must be tailored locally while engaging globally. Despite differing views, all panellists agreed on the need to shift from a narrow focus on disinformation to a broader and more nuanced understanding of information harm, grounded in cooperation, local agency, and collective responsibility.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

Preserving languages in a digital world: A call for inclusive action

At the WSIS+20 High-Level Event in Geneva, UNESCO convened a powerful session on the critical need to protect multilingualism in the digital age. With over 8,000 languages spoken globally but fewer than 120 represented online, the panel warned of a growing digital divide that excludes billions and marginalises thousands of cultures.

Dr Tawfik Jelassi of UNESCO painted a vivid metaphor of the internet as a vast library where most languages have no books on the shelves, calling for urgent action to safeguard humanity’s linguistic and cultural diversity.

Speakers underscored that bridging this divide goes beyond creating language tools—it requires systemic change rooted in policy, education, and community empowerment. Guilherme Canela of UNESCO highlighted ongoing initiatives like the 2003 Recommendation on Multilingualism and the UN Decade of Indigenous Languages, which has already inspired 15 national action plans.

Panellists like Valts Ernstreits and Sofiya Zahova emphasised community-led efforts, citing examples from Latvia, Iceland, and Sámi institutions that show how native speakers and local institutions must lead digital inclusion efforts.

Africa’s case brought the urgency into sharp focus. David Waweru noted that despite hosting a third of the world’s languages, less than 0.1% of websites feature African language content. Yet, promising efforts like the African Storybook project and AI language models show how local storytelling and education can thrive in digital spaces.

Elena Plexida of ICANN revealed that only 26% of email servers accept non-Latin addresses, a stark reminder of the structural barriers to full digital participation.

The session concluded with a strong call for multistakeholder collaboration. Governments, tech companies, indigenous communities, and civil society must work together to make multilingualism the default, not the exception, in digital spaces. As Jelassi put it, ensuring every language has a place online is not just a technical challenge but a matter of cultural survival and digital justice.


Women researchers showcase accessibility breakthroughs at WSIS

At the WSIS+20 High-Level Event 2025 in Geneva, the session titled ‘Media and Education for All: Bridging Female Academic Leaders and Society towards Impactful Results’ spotlighted how female academic experts are applying AI to make media and education more inclusive and accessible. Organised by the AXS-CAT network at Universitat Autònoma de Barcelona and moderated by Dr Anita Lamprecht from Diplo, the session showcased a range of innovative projects that translate university research into real-world impact.

One highlight was the ENACT project, presented by Professor Ana Matamala, which develops simplified news content to serve audiences such as migrants, people with intellectual disabilities, and language learners. While 13 European organisations already offer some easy-to-understand content, challenges remain in maintaining journalistic integrity while ensuring accessibility.

Meanwhile, Professor Pilar Orero unveiled three AI-driven projects: Mosaic, a searchable public broadcaster archive hub; Alfie, which tackles AI bias in media; and a climate change initiative focused on making scientific data more comprehensible to the public. Several education-centred projects also took the stage.

Dr Estella Oncins introduced the Inclusivity project, which uses virtual reality to engage neurodiverse students and promote inclusive teaching methods. Dr Mireia Farrus presented Scribal, a real-time AI-powered transcription and translation tool for university lectures, tailored to support Catalan language users and students with hearing impairments.

Additionally, Dr Mar Gutierrez Colon shared two accessibility tools: a gamified reading app for children in Kenya and an English language test adapted for students with special educational needs. During the Q&A, discussions turned to the challenges of teaching fast-evolving technologies like AI, especially given the scarcity of qualified educators.

The speakers emphasised that digital accessibility is not just a technical concern but a matter of educational justice, advocating for stronger collaboration between academia and industry to ensure inclusive learning opportunities for all.


East Meets West: Reimagining education in the age of AI

At the WSIS+20 High-Level Event in Geneva, the session ‘AI (and) education: Convergences between Chinese and European pedagogical practices’ brought together educators, students, and industry experts to examine how AI reshapes global education.

Led by Jovan Kurbalija of Diplo and Professor Hao Liu of Beijing Institute of Technology (BIT), with industry insights from Deloitte’s Norman Sze, the discussion focused on the future of universities and the evolving role of professors amid rapid AI developments.

Drawing on philosophical traditions from Confucius to Plato, the session emphasised the need for a hybrid approach that preserves the human essence of learning while embracing technological transformation.

Professor Liu showcased BIT’s ‘intelligent education’ model, a human-centred system integrating time, space, knowledge, teachers, and students. Moving beyond rigid, exam-focused instruction, BIT promotes creativity and interdisciplinary learning, empowering students with flexible academic paths and digital tools.

Jovan Kurbalija at WSIS+20 High-Level Event 2025
Jovan Kurbalija, Executive Director of Diplo

Meanwhile, Norman Sze highlighted how AI has accelerated industry workflows and called for educational alignment with real-world demands. He argued for reorienting learning around critical thinking, ethical literacy, and collaboration—skills that AI cannot replicate and remain central to personal and professional growth.

A key theme was whether teachers and universities remain relevant in an AI-driven future. Students from around the world contributed compelling reflections: AI may offer efficiency, but it cannot replace the emotional intelligence, mentorship, and meaning-making that only human educators provide.

As one student said, ‘I don’t care about ChatGPT—it’s not human.’ The group reached a consensus: professors must shift from ‘sages on the stage’ to ‘guides on the side,’ coaching students through complexity rather than merely transmitting knowledge.

The session closed on an optimistic note, asserting that while AI is a powerful catalyst for change, the heart of education lies in human connection, dialogue, and the ability to ask the right questions. Participants agreed that a truly forward-looking educational model will emerge not from choosing between East and West or human and machine, but from integrating the best of all to build a more inclusive and insightful future of learning.


ChatGPT use among students raises concerns over critical thinking

A university lecturer in the United States says many students are increasingly relying on ChatGPT to write essays—even about the ethics of AI—raising concerns about critical thinking in higher education.

Dr Jocelyn Leitzinger from the University of Illinois noticed that nearly half of her 180 students used the tool inappropriately last semester. Some submissions even repeated generic names like ‘Sally’ in personal anecdotes, hinting at AI-generated content.

A recent preprint study by researchers at MIT appears to back those concerns. In a small experiment involving 54 adult learners, those who used ChatGPT produced essays with weaker content and less brain activity, as recorded by EEG headsets.

Researchers found that 80% of the AI-assisted group could not recall anything from their essay afterwards. In contrast, the ‘brain-only’ group—those who wrote without assistance—performed better in both comprehension and neural engagement.

Despite some media headlines suggesting that ChatGPT makes users lazy or less intelligent, the researchers stress the need for caution. They argue more rigorous studies are required to understand how AI affects learning and thinking.

Educators say the tool’s polished writing often lacks originality and depth. One student admitted using ChatGPT for ideas and lecture summaries but drew the line at letting it write his assignments.

Dr Leitzinger worries that relying too heavily on AI skips essential steps in learning. ‘Writing is thinking, thinking is writing,’ she said. ‘When we eliminate that process, what does that mean for thinking?’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini AI suite expands to help teachers plan and students learn

Google has unveiled a major expansion of its Gemini AI tools tailored for classroom use, launching over 30 features to support teachers and students. These updates include personalised AI-powered lesson planning, content generation, and interactive study guides.

Teachers can now create custom AI tutors, known as ‘Gems’, to assist students with specific academic needs using their own teaching materials. Google’s AI reading assistant is also gaining real-time support features through the Read Along tool in Classroom, enhancing literacy development for younger users.

Students and teachers will benefit from wider access to Google Vids, the company’s video creation app, enabling them to create instructional content and complete multimedia assignments.

Additional features aim to monitor student progress, manage AI permissions, improve data security, and streamline classroom content delivery using new Class tools.

By placing AI directly into the hands of educators, Google aims to offer more engaging and responsive learning, while keeping its tools aligned with classroom goals and policies. The rollout continues Google’s bid to take the lead in the evolving AI-driven edtech space.


The cognitive cost of AI: Balancing assistance and awareness

The double-edged sword of AI assistance

The rapid integration of AI tools like ChatGPT into daily life has transformed how we write, think, and communicate. AI has become a ubiquitous companion, helping students draft essays and professionals streamline emails.

However, a new study by MIT raises a crucial red flag: excessive reliance on AI may come at the cost of our own mental sharpness. Researchers discovered that frequent ChatGPT users showed significantly lower brain activity, particularly in areas tied to critical thinking and creativity.

The study introduces a concept dubbed ‘cognitive debt,’ a reminder that while AI offers convenience, it may undermine our cognitive resilience if not used responsibly.

MIT’s method: How the study was conducted

The MIT Media Lab study involved 54 participants split into three groups: one used ChatGPT, another used traditional search engines, and the third completed tasks unaided. Participants were assigned writing exercises over multiple sessions while their brain activity was tracked using electroencephalography (EEG).

That method allowed scientists to measure changes in alpha and beta waves, indicators of mental effort. The findings revealed a striking pattern: those who depended on ChatGPT demonstrated the lowest brain activity, especially in the frontal cortex, where high-level reasoning and creativity originate.

Diminished mental engagement and memory recall

One of the most alarming outcomes of the study was the cognitive disengagement observed in AI users. Not only did they show reduced brainwave activity, but they also struggled with short-term memory.

Many could not recall what they had written just minutes earlier because the AI had done most of the cognitive heavy lifting. This detachment from the creative process meant that users were no longer actively constructing ideas or arguments but passively accepting the machine-generated output.

The result? A diminished sense of authorship and ownership over one’s own work.

Homogenised output: The erosion of creativity

The study also noted a tendency for AI-generated content to appear more uniform and less original. While ChatGPT can produce grammatically sound and coherent text, it often lacks the personal flair, nuance, and originality that come from genuine human expression.

Essays written with AI assistance were found to be more homogenised, lacking distinct voice and perspective. This raises concerns, especially in academic and creative fields, where originality and critical thinking are fundamental.

The overuse of AI could subtly condition users to accept ‘good enough’ content, weakening their creative instincts over time.

The concept of cognitive debt

‘Cognitive debt’ refers to the mental atrophy that can result from outsourcing too much thinking to AI. Like financial debt, this form of cognitive laziness builds over time and eventually demands repayment, often in the form of diminished skills when the tool is no longer available.


Participants who became accustomed to using AI found it more challenging to write without it later on. This reliance suggests that continuous use without active mental engagement can erode our capacity to think deeply, form complex arguments, and solve problems independently.

A glimmer of hope: Responsible AI use

Despite these findings, the study offers hope. Participants who started tasks without AI and only later integrated it showed significantly better cognitive performance.

That implies that when AI is used as a complementary tool rather than a replacement, it can support learning and enhance productivity. By encouraging users to first engage with the problem and then use AI to refine or expand their ideas, we can strike a healthy balance between efficiency and mental effort.

Rather than abstinence, responsible usage is the key to retaining our cognitive edge.

Use it or lose it

The MIT study underscores a critical reality of our AI-driven era: while tools like ChatGPT can boost productivity, they must not become a substitute for thinking itself. Overreliance risks weakening the faculties that define human intelligence: creativity, reasoning, and memory.

The challenge in the future is to embrace AI mindfully, ensuring that we remain active participants in the cognitive process. If we treat AI as a partner rather than a crutch, we can unlock its full potential without sacrificing our own.


Hacktivist attacks surge in Iran–Israel tensions

The Iran–Israel conflict has now expanded into cyberspace, with rival hacker groups launching waves of politically driven attacks.

Following Israel’s military operation against Iran, pro-Israeli hackers known as ‘Predatory Sparrow’ struck Iran’s Sepah Bank, deleting data and causing significant service disruption.

A day later, the same group targeted Nobitex, Iran’s largest crypto exchange, stealing and destroying over $90 million in assets.

Cyberattacks intensified in the days before and after the Israeli strikes. According to NSFOCUS, cyberattacks on Iran peaked three days before the military operation, suggesting pre-attack reconnaissance.

In retaliation, pro-Iranian hackers escalated attacks on Israel on 16 June, focusing on government systems, aerospace, and education.

While attacks on Iran have been fewer, Israeli systems have faced over 1,300 attacks in 2025 alone, with 37% of all global hacktivist activity aimed at Israel since the conflict began.

However, analysts note these attacks have been high in volume but limited in impact. The attackers’ typical tactics involve evading antivirus software, deleting data, and disabling recovery systems.

NSFOCUS warns that geopolitical tensions are turning hacktivist groups into informal cyber proxies. Though not formally state-backed, these loosely organised actors align closely with national interests.

As traditional defences lag, cybersecurity experts argue that national infrastructure must adopt more strategic, coordinated defence measures instead of fragmented responses, especially during crises and conflicts.


Kurbalija’s book on internet governance turns 20 with new life at IGF

At the Internet Governance Forum 2025 in Lillestrøm, Norway, Jovan Kurbalija launched the eighth edition of his seminal textbook ‘Introduction to Internet Governance’, marking a return to writing after a nine-year pause. Moderated by Sorina Teleanu of Diplo, the session unpacked not just the content of the new edition but also the reasoning behind retaining its original title in an era crowded with buzzwords like ‘AI governance’ and ‘digital governance.’

Kurbalija defended the choice, arguing that most so-called digital issues—from content regulation to cybersecurity—ultimately operate over internet infrastructure, making ‘Internet governance’ the most precise term available.

The updated edition reflects both continuity and adaptation. He introduced ‘Kaizen publishing,’ a new model that replaces the traditional static book cycle with a continuously updated digital platform. Driven by the fast pace of technological change and aided by AI tools trained on his own writing style, the new format ensures the book evolves in real time with policy and technological developments.

Jovan Kurbalija at the book launch

The new edition is structured as a seven-floor pyramid tackling 50 key issues, spanning their historical origins and future internet governance trajectories. The book also traces the deep historical roots of digital policy.

Kurbalija highlighted how key global internet governance frameworks—such as ICANN, the WTO e-commerce moratorium, and UN cyber initiatives—emerged within months of each other in 1998, a pivotal moment he calls foundational to today’s landscape. He contrasted this historical consistency with recent transformations, identifying four key shifts since 2016: mass data migration to the cloud, COVID-19’s digital acceleration, the move from CPUs to GPUs, and the rise of AI.

Finally, the session tackled the evolving discourse around AI governance. Kurbalija emphasised the need to weigh long-term existential risks against more immediate challenges like educational disruption and concentrated knowledge power. He also critiqued the shift in global policy language—from knowledge-centric to data-driven frameworks—and warned that this transformation might obscure AI’s true nature as a knowledge-based phenomenon.

As geopolitics reasserts itself in digital governance debates, Kurbalija’s updated book aims to ground readers in the enduring principles shaping an increasingly complex landscape.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

AI and the future of work: Global forum highlights risks, promise, and urgent choices

At the 20th Internet Governance Forum held in Lillestrøm, Norway, global leaders, industry experts, and creatives gathered for a high-level session exploring how AI is transforming the world of work. While the tone was broadly optimistic, participants wrestled with difficult questions about equity, regulation, and the ethics of data use.

AI’s capacity to enhance productivity, reshape industries, and bring solutions to health, education, and agriculture was celebrated, but sharp divides emerged over how to govern and share its benefits. Concrete examples showcased AI’s positive impact. Norway’s government highlighted AI’s role in green energy and public sector efficiency, while Lesotho’s minister shared how AI helps detect tuberculosis and support smallholder farmers through localised apps.

Panellists also noted that AI can address systemic shortfalls in healthcare by reducing documentation burdens and enabling earlier diagnosis. Corporate representatives from Meta and OpenAI showcased tools that personalise education, assist the visually impaired, and democratise advanced technology through open-source platforms.

Joseph Gordon-Levitt at IGF 2025

Yet, concerns about fairness and data rights loomed large. Actor and entrepreneur Joseph Gordon-Levitt delivered a pointed critique of tech companies using creative work to train AI without consent or compensation.

He called for economic systems that reward human contributions, warning that failing to do so risks eroding creative and financial incentives. This argument underscored broader concerns about job displacement, automation, and the growing digital divide, especially among women and marginalised communities.

Debates also exposed philosophical rifts between regulatory approaches. While the US emphasised minimal interference to spur innovation, the European Commission and Norway called for risk-based regulation and international cooperation to ensure trust and equity. Speakers agreed on the need for inclusive governance frameworks and education systems that foster critical thinking, resist de-skilling, and prepare workers for an AI-augmented economy.

The session made clear that the future of work in the AI era depends on collective choices made today, choices that must centre people, fairness, and global solidarity.
