Building digital resilience in an age of crisis

At the WSIS+20 High-Level Event in Geneva, the session ‘Information Society in Times of Risk’ spotlighted how societies can harness digital tools to weather crises more effectively. Experts and researchers from across the globe shared innovations and case studies that emphasised collaboration, inclusiveness, and preparedness.

Chairs Horst Kremers and Professor Ke Gong opened the discussion by reinforcing the UN’s all-of-society principle, which advocates cooperation among governments, civil society, tech companies, and academia in facing disaster risks.

The Singapore team unveiled their pioneering DRIVE framework—Digital Resilience Indicators for Veritable Empowerment—redefining resilience not as a personal skill set but as a dynamic process shaped by individuals’ environments, from family to national policies. They argued that digital resilience must include social dimensions such as citizenship, support networks, and systemic access, making it a collective responsibility in the digital era.

Turkish researchers analysed over 54,000 social media images shared after the 2023 earthquakes, showing how visual content can fuel digital solidarity and real-time coordination. However, they also revealed how the breakdown of communication infrastructure in the immediate aftermath severely hampered response efforts, underscoring the urgent need for robust and redundant networks.

Meanwhile, Chinese tech giant Tencent demonstrated how integrated platforms—such as WeChat and AI-powered tools—transform disaster response, enabling donations, rescues, and community support on a massive scale. Yet, presenters cautioned that while AI holds promise, its current role in real-time crisis management remains limited.

The session closed with calls for pro-social platform designs to combat polarisation and disinformation, and a shared commitment to building inclusive, digitally resilient societies that leave no one behind.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

AI glasses deliver real-time theatre subtitles

An innovative trial at Amsterdam’s Holland Festival saw the Dutch theatre company Het Nationale Theatre, in partnership with XRAI and Audinate, unveil smart glasses that project real-time subtitles in 223 languages via a Dante audio network and AI software.

Attendees of The Seasons experienced dynamic transcription and translation streamed directly to XREAL AR glasses. Voices from each actor’s microphone were processed by XRAI’s AI, with subtitles overlaid in matching colours to distinguish the speakers on stage.
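
To make the flow concrete, here is a minimal Python sketch of the per-speaker subtitle pipeline described above: one audio channel per actor’s microphone, a translation step, and a fixed colour per channel so the glasses can distinguish speakers. The real system runs on XRAI’s software over a Dante audio network; every name, colour value, and the translate() stub below are illustrative assumptions rather than the production API.

# A minimal, illustrative sketch of the per-speaker subtitle flow described above.
# The real system runs on XRAI's software and a Dante audio network; the names,
# colour palette, and translate() stub here are assumptions for illustration only.

from dataclasses import dataclass

# One Dante channel per actor's microphone; each channel gets a fixed subtitle colour.
CHANNEL_COLOURS = {1: "#FFD700", 2: "#00BFFF", 3: "#FF69B4"}

@dataclass
class SubtitleEvent:
    channel: int   # microphone channel identifying the speaker
    text: str      # translated subtitle text
    colour: str    # colour used to distinguish this speaker in the glasses

def translate(text: str, target_lang: str) -> str:
    # Stand-in for the AI step; a real system would call speech-to-text
    # and machine-translation services here.
    return f"[{target_lang}] {text}"

def subtitle_for_line(channel: int, transcript: str, target_lang: str) -> SubtitleEvent:
    """Turn one transcribed line from one actor's mic into a colour-coded subtitle."""
    return SubtitleEvent(
        channel=channel,
        text=translate(transcript, target_lang),
        colour=CHANNEL_COLOURS.get(channel, "#FFFFFF"),
    )

# Example: a line from the actor on channel 2, subtitled in English.
print(subtitle_for_line(2, "Het seizoen begint vanavond.", "en"))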

Designed to enhance the theatre’s accessibility, the system supports non-Dutch speakers and those with hearing loss. Testing continues this summer, with full implementation expected from autumn.

LiveText replaces the dated method of back-of-house captioning: subtitles now appear in real time at actor-appropriate visual depth, with complex languages and writing systems handled automatically.

Proponents believe the glasses mark a breakthrough for inclusion, with potential uses at international conferences, music festivals and other live events worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Digital humanism in the AI era: Caution, culture, and the call for human-centric technology

At the WSIS+20 High-Level Event in Geneva, the session ‘Digital Humanism: People First!’ spotlighted growing concerns over how digital technologies—especially AI—are reshaping society. Moderated by Alfredo M. Ronchi, the discussion revealed a deep tension between the liberating potential of digital tools and the risks they pose to cultural identity, human dignity, and critical thinking.

Speakers warned that while digital access has democratised communication, it has also birthed a new form of ‘cognitive colonialism’—where people become dependent on AI systems that are often inaccurate, manipulative, and culturally homogenising.

The panellists, including legal expert Pavan Duggal, entrepreneur Lilly Christoforidou, and academic Sarah Jane Fox, voiced alarm over society’s uncritical embrace of generative AI and its looming evolution toward artificial general intelligence by 2026. Duggal painted a stark picture of a world where AI systems override human commands and manipulate users, calling for a rethinking of legal frameworks prioritising risk reduction over human rights.

Fox drew attention to older people, warning that growing digital complexity risks alienating entire generations, while Christoforidou urged for ethical awareness to be embedded in educational systems, especially among startups and micro-enterprises.

Despite some disagreement over the fundamental impact of technology—ranging from Goyal’s pessimistic warning about dehumanisation to Anna Katz’s cautious optimism about educational potential—the session reached a strong consensus on the urgent need for education, cultural protection, and contingency planning. Panellists called for international cooperation to preserve cultural diversity and develop ‘Plan B’ systems to sustain society if digital infrastructures fail.

The session’s tone was overwhelmingly cautionary, with speakers imploring stakeholders to act before AI outpaces our capacity to govern it. Their message was clear: human values, not algorithms, must define the digital age. Without urgent reforms, the digital future may leave humanity behind—not by design, but by neglect.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

UN leaders chart inclusive digital future at WSIS+20

At the WSIS+20 High-Level Event in Geneva, UN leaders gathered for a pivotal dialogue on shaping an inclusive digital transformation, marking two decades since the World Summit on the Information Society (WSIS). Speakers across the UN system emphasised that technology must serve people, not vice versa.

They highlighted that bridging the digital divide is critical to ensuring that innovations like AI uplift all of humanity, not just those in advanced economies. Without equitable access, the benefits of digital transformation risk reinforcing existing inequalities and leaving millions behind.

The discussion showcased how digital technologies are already transforming disaster response and strengthening climate resilience. The World Meteorological Organization and the UN Office for Disaster Risk Reduction illustrated how AI powers early warning systems and real-time risk analysis, saving lives in vulnerable regions.

Meanwhile, the Food and Agriculture Organization of the UN underscored the need to align technology with basic human needs, reminding the audience that ‘AI is not food,’ and calling for thoughtful, efficient deployment of digital tools to address global hunger and development.

Workforce transformation and leadership in the AI era also featured prominently. Leaders from the International Labour Organization and UNITAR stressed that while AI may replace some roles, it will augment many more, making digital literacy, ethical foresight, and collaborative governance essential skills. Examples from within the UN system itself, such as the digitisation of the Joint Staff Pension Fund through facial recognition and blockchain, demonstrated how innovation can enhance services without sacrificing inclusivity or ethics.

As the session closed, speakers collectively reaffirmed the importance of human rights, international cooperation, and shared digital governance. They stressed that the future of global development hinges on treating digital infrastructure and knowledge as public goods.

With the WSIS framework and Global Digital Compact as guideposts, UN leaders called for sustained, unified efforts to ensure that digital transformation uplifts every community and contributes meaningfully to the Sustainable Development Goals.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

UNESCO panel calls for ethics to be core of emerging tech, not an afterthought

At the WSIS+20 High-Level Event in Geneva, UNESCO hosted a session titled ‘Ethics in AI: Shaping a Human-Centred Future in the Digital Age,’ where global experts warned that ethics must be built into the foundation of emerging technologies such as AI, neurotechnology, and quantum computing—not added later as damage control.

UNESCO’s Chief of Bioethics and Ethics of Science and Technology, Dafna Feinholz, stressed that ethical considerations should shape technology development from the start, echoing the organisation’s mission to safeguard human rights and freedoms alongside scientific innovation.

Panellists underscored the tension between individual intentions and institutional realities. Philosopher Mira Wolf-Bauwens argued that while developers often begin with a sense of moral responsibility, corporate pressures quickly override these principles.

Drawing from her work in the quantum sector, she described how companies dilute ethical concerns into mere legal compliance, eroding their original purpose. Neuroscientist and entrepreneur Ryota Kanai echoed this concern, sharing how the rush to commercialise neurotechnology has led to premature products that risk undermining public trust, especially when privacy risks remain poorly understood.

The session also highlighted success stories in ethical governance, such as Thailand’s efforts to implement UNESCO’s AI ethics framework. Chaichana Mitrpant, leading the country’s digital policy agency, described a localised yet uncompromised approach that engaged multiple stakeholders—from regulators to small businesses. The collaborative model helped tailor global ethical guidelines to national realities while maintaining core human values.

Panellists agreed that while regulation plays a role, ethics must remain broader, more agile, and focused on motivation rather than just rule enforcement. With technologies evolving faster than laws can adapt, anticipatory governance, cross-sector collaboration, and inclusive debate were hailed as essential. The session closed with a shared call to action: embedding ethics in every stage of technology development is not just ideal—it’s urgently necessary to build a trustworthy digital future.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

AI and big data to streamline South Korea’s drug evaluation processes

The Ministry of Food and Drug Safety (MFDS) of South Korea is modernising its drug review and evaluation processes by incorporating AI, big data, and other emerging technologies.

The efforts are being spearheaded by the ministry’s National Institute for Food and Drug Safety Evaluation (NIFDS).

Starting next year, NIFDS plans to apply AI to assist with routine tasks such as preparing review data.

The initial focus will be on synthetic chemical drugs, gradually expanding to other product categories.

‘Initial AI applications will focus on streamlining repetitive tasks,’ said Jeong Ji-won, head of the Pharmaceutical and Medical Device Research Department at NIFDS.

‘The AI system is being developed internally, and we are evaluating its potential for real-world inspection scenarios. A phased approach is necessary due to the large volume of data required,’ Jeong added.

In parallel, NIFDS is exploring the use of big data in various regulatory activities.

One initiative involves applying big data analytics to enhance risk assessments during overseas good manufacturing practice (GMP) inspections. ‘Standardisation remains a challenge due to varying formats across facilities,’ said Sohn Kyung-hoon, head of the Drug Research Division.

‘Nonetheless, we’re working to develop a system that enhances the efficiency of inspections without relying on foreign collaborations.’ Efforts also include building domain-specific Korean-English translation models for safety documentation.

The institute also integrates AI into pharmaceutical manufacturing oversight and is developing public data utilisation frameworks. These efforts include systems for analysing adverse drug reaction reports and standardising data inputs.

NIFDS is actively researching new analysis methods and safety protocols regarding impurity control.

‘We’re prioritising research on impurities such as NDMA,’ Sohn noted. Simultaneous detection methods are being tailored for smaller manufacturers.

New categorisation techniques are also being developed to monitor previously untracked substances.

On the biologics front, NIFDS aims to finalise its mRNA vaccine evaluation technology by year-end.

The five-year project supports the national strategy for improving infectious disease preparedness in South Korea, including work on delivery mechanisms and material composition.

‘This initiative is part of our broader strategy to improve preparedness for future infectious disease outbreaks,’ said Lee Chul-hyun, head of the Biologics Research Division.

Evaluation protocols for antibody drugs are still in progress. However, indirect support is being provided through guidelines and benchmarking against international cases. Separately, the Herbal Medicine Research Division is upgrading its standardised product distribution model.

The current use-based system will shift to a field-based one next year, extending to the pharmaceuticals, functional foods, and cosmetics sectors.

‘We’re refining the system to improve access and quality control,’ said Hwang Jin-hee, head of the division. Collaboration with regional research institutions remains a key component of this work.

NIFDS currently offers 396 standardised herbal medicines. The institute continues to develop new reference materials annually as part of its evolving strategy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

LG’s Exaone Path 2.0 uses AI to transform genetic testing

LG AI Research has introduced Exaone Path 2.0, an upgraded AI model designed to analyse pathology images for disease diagnosis, significantly reducing the time required for genetic testing.

The new model, unveiled Wednesday, can reportedly process pathology images in under a minute—a significant shift from conventional genetic testing methods that often take more than two weeks.

According to LG, the AI system offers enhanced accuracy in detecting genetic mutations and gene expression patterns by learning from detailed image patches and full-slide pathology data.

Developed by LG AI Research, a division of the LG Group, Exaone Path 2.0 is trained on over 10,000 whole-slide images (WSIs) and multiomics pairs, enabling it to integrate structural information with molecular biology insights. The company said it has achieved a 78.4 percent accuracy rate in predicting genetic mutations.
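
LG has not published the model’s internals in this article, but the ‘image patches plus full-slide data’ description matches a common pattern in whole-slide image modelling: tile the slide into patches, encode each patch, and pool the patch features into a slide-level representation used for mutation prediction. The Python sketch below illustrates that generic pattern only; the patch size, the toy encoder, and mean pooling are assumptions, not Exaone Path 2.0’s actual design.

# A generic sketch of the patch-plus-whole-slide pattern the article describes:
# a whole-slide image is split into patches, each patch is encoded into a feature
# vector, and the patch features are pooled into a slide-level representation
# used downstream to predict mutation status. This is NOT Exaone Path 2.0's
# published architecture; patch size, encoder, and pooling are illustrative assumptions.

import numpy as np

PATCH_SIZE = 256  # pixels per patch side (assumption)

def split_into_patches(slide: np.ndarray) -> list[np.ndarray]:
    """Tile a whole-slide image (H x W x 3 array) into non-overlapping patches."""
    h, w, _ = slide.shape
    return [
        slide[y:y + PATCH_SIZE, x:x + PATCH_SIZE]
        for y in range(0, h - PATCH_SIZE + 1, PATCH_SIZE)
        for x in range(0, w - PATCH_SIZE + 1, PATCH_SIZE)
    ]

def encode_patch(patch: np.ndarray) -> np.ndarray:
    """Stand-in for a learned patch encoder; here just simple intensity statistics."""
    return np.array([patch.mean(), patch.std()])

def slide_representation(slide: np.ndarray) -> np.ndarray:
    """Pool patch features into one slide-level vector (mean pooling as a placeholder)."""
    features = np.stack([encode_patch(p) for p in split_into_patches(slide)])
    return features.mean(axis=0)

# Example: a small synthetic 'slide' stands in for a gigapixel pathology image.
toy_slide = np.random.rand(1024, 1024, 3)
print(slide_representation(toy_slide))  # slide-level feature vector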

The model has also been tailored for specific applications in oncology, including lung and colorectal cancers, where it can help clinicians identify patient groups most likely to benefit from targeted therapies.

LG AI Research is collaborating with Professor Hwang Tae-hyun and his team at Vanderbilt University Medical Center in the US to further its application in real-world clinical settings.

Their shared goal is to develop a multimodal medical AI platform that can support precision medicine directly within clinical environments.

Hwang, a key contributor to the US government’s Cancer Moonshot program and founder of the Molecular AI Initiative at Vanderbilt, emphasised that the aim is to create AI tools usable by clinicians in active medical practice, rather than limiting innovation to the lab.

In addition to oncology, LG AI Research plans to extend its multimodal AI initiatives into transplant rejection, immunology, and diabetes.

It is also collaborating with the Jackson Laboratory to support Alzheimer’s research and working with Professor Baek Min-kyung’s team at Seoul National University on next-generation protein structure prediction.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Kurbalija: Digital tools are reshaping diplomacy

Once the global stage for peace negotiations and humanitarian accords, Geneva finds itself at the heart of a new kind of diplomacy shaped by algorithms, data flows, and AI. Jovan Kurbalija, Executive Director of Diplo and Head of the Geneva Internet Platform, believes this transformation reflects Geneva’s long tradition of engaging with science, technology, and global governance. He explained this in an interview with Léman Bleu.

Diplo, a Swiss-Maltese foundation, supports diplomats and international professionals as they navigate the increasingly complex landscape of digital governance.

‘Where we once trained them to understand the internet,’ Kurbalija explains, ‘we now help them grasp and negotiate issues around AI and digital tools.’

The foundation not only aids diplomats in addressing cyber threats and data privacy but also equips them with AI-enhanced tools for negotiation, public communication, and consular protection.

According to Kurbalija, digital governance touches everyone. From how our phones are built to how data moves across borders, nearly 50 distinct issues—from cybersecurity and e-commerce to data protection and digital standards—are debated in the corridors of International Geneva. These debates are no longer reserved for specialists because they affect the everyday lives of billions.

Kurbalija draws a fascinating connection between Geneva’s philosophical heritage and today’s technological dilemmas. Writers like Mary Shelley, Voltaire, and Borges, each with ties to Geneva, grappled with themes eerily relevant today: unchecked scientific ambition, the tension between freedom and control, and the challenge of processing vast amounts of knowledge. He dubs this tradition ‘EspriTech de Genève,’ a spirit of intellectual inquiry that still echoes in debates over AI and its impact on society.

AI, Kurbalija warns, is both a marvel and a potential menace.

‘It’s not exactly Frankenstein,’ he says, ‘but without proper governance, it could become one.’

As technology evolves, so must the international mechanisms that ensure it serves humanity rather than endangering it.

Diplomacy, meanwhile, is being reshaped not just in terms of content but in method. Digital tools allow diplomats to engage more directly with the public and make negotiations more transparent. Yet, the rise of social media has its downsides. Public broadcasting of diplomatic proceedings risks undermining the very privacy and trust needed to reach a compromise.

‘Diplomacy,’ Kurbalija notes, ‘needs space to breathe—to think, negotiate, resolve.’

He also cautions against the growing concentration of AI and data power in the hands of a few corporations.

‘We risk having our collective knowledge privatised, commodified, and sold back to us,’ he says.

The antidote? A push for more inclusive, bottom-up AI development that empowers individuals, communities, and nations.

As Geneva continues its historic role in shaping the future, Kurbalija’s message is clear: managing technology wisely is not just a diplomatic challenge—it’s a global necessity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Humanitarian, peace, and media sectors join forces to tackle harmful information

At the WSIS+20 High-Level Event in Geneva, a powerful session brought together humanitarian, peacebuilding, and media development actors to confront the growing threat of disinformation, reframed more broadly as ‘harmful information.’ Panellists emphasised that false or misleading content, whether deliberately spread or unintentionally harmful, can have dire consequences for already vulnerable populations, fuelling violence, eroding trust, and distorting social narratives.

The session moderator, Caroline Vuillemin of Fondation Hirondelle, underscored the urgency of uniting these sectors to protect those most at risk.

Hans-Peter Wyss of the Swiss Agency for Development and Cooperation presented the ‘triple nexus’ approach, advocating for coordinated interventions across humanitarian, development, and peacebuilding efforts. He stressed the vital role of trust, institutional flexibility, and the full inclusion of independent media as strategic actors.

Philippe Stoll of the ICRC detailed an initiative that focuses on the tangible harms of information—physical, economic, psychological, and societal—rather than debating truth. That initiative, grounded in a ‘detect, assess, respond’ framework, works from local volunteer training up to global advocacy and research on emerging challenges like deepfakes.

Donatella Rostagno of Interpeace shared field experiences from the Great Lakes region, where youth-led efforts to counter misinformation have created new channels for dialogue in highly polarised societies. She highlighted the importance of inclusive platforms where communities can express their own visions of peace and hear others’.

Meanwhile, Tammam Aloudat of The New Humanitarian critiqued the often selective framing of disinformation, urging support for local journalism and transparency about political biases, including the harm caused by omission and silence.

The session concluded with calls for sustainable funding and multi-level coordination, recognising that responses must be tailored locally while engaging globally. Despite differing views, all panellists agreed on the need to shift from a narrow focus on disinformation to a broader and more nuanced understanding of information harm, grounded in cooperation, local agency, and collective responsibility.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

UNESCO pushes for digital trust at WSIS+20

At the WSIS+20 High-Level Event in Geneva, UNESCO convened a timely session exploring how to strengthen global information ecosystems through responsible platform governance and smart technology use. The discussion, titled ‘Towards a Resilient Information Ecosystem’, brought together international regulators, academics, civil society leaders, and tech industry representatives to assess digital media’s role in shaping public discourse, especially in times of crisis.

UNESCO’s Assistant Director-General Tawfik Jelassi emphasised the organisation’s longstanding mission to build peace through knowledge sharing, warning that digital platforms now risk becoming breeding grounds for misinformation, hate speech, and division. To counter this, he highlighted UNESCO’s ‘Internet for Trust’ initiative, which produced governance guidelines informed by over 10,000 global contributions.

Speakers called for a shift from viewing misinformation as an isolated problem to understanding the broader digital communication ecosystem, especially during crises such as wars or natural disasters. Professor Ingrid Volkmer stressed that global monopolies like Starlink, Amazon Web Services, and OpenAI dominate critical communication infrastructure, often without sufficient oversight.

She urged a paradigm shift that treats crisis communication as an interconnected system requiring tailored regulation and risk assessments. France’s digital regulator Frédéric Bokobza outlined the European Digital Services Act’s role in enhancing transparency and accountability, noting the importance of establishing direct cooperation with platforms, particularly during elections.

The panel also spotlighted ways to empower users. Google’s Nadja Blagojevic showcased initiatives like SynthID watermarking for AI-generated content and media literacy programs such as ‘Be Internet Awesome,’ which aim to build digital critical thinking skills across age groups.

Meanwhile, Maria Paz Canales from Global Partners Digital offered a civil society perspective, sharing how AI tools protect protestors’ identities, preserve historical memory, and amplify marginalised voices, even amid funding challenges. She also called for regulatory models distinguishing between traditional commercial media and true public interest journalism, particularly in underrepresented regions like Latin America.

The session concluded with a strong call for international collaboration among regulators and platforms, affirming that information should be treated as a public good. Participants underscored the need for inclusive, multistakeholder governance and sustainable support for independent media to protect democratic values in an increasingly digital world.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.