At the 2025 Internet Governance Forum in Lillestrøm, Norway, experts gathered to discuss how to better involve diverse communities, especially indigenous and underrepresented groups, in the technical governance of the internet. The session, led by Niger’s Anne Rachel Inne, emphasised that meaningful participation requires more than token inclusion; it demands structural reforms and practical engagement tools.
Central to the dialogue was the role of multilingualism, which UNESCO’s Guilherme Canela de Souza described as both a right and a necessity for true digital inclusion. ICANN’s Theresa Swinehart spotlighted ‘Universal Acceptance’ as a tangible step toward digital equality, ensuring that domain names and email addresses work in all languages and scripts.
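To make ‘Universal Acceptance’ concrete, the short Python sketch below (a minimal illustration, not ICANN tooling) shows how an internationalised domain name maps to the ASCII-compatible ‘punycode’ form used in the DNS, and how naive ASCII-only validation rejects the Unicode form, which is exactly the kind of breakage Universal Acceptance aims to eliminate. The domain is hypothetical, and the snippet uses Python’s built-in idna codec, which implements the older IDNA 2003 rules.

```python
# Minimal illustration of Universal Acceptance (hypothetical domain):
# an internationalised domain name (IDN) has an ASCII-compatible "punycode"
# form used in the DNS, and applications should accept both representations.
import re

unicode_domain = "bücher.example"  # hypothetical internationalised domain

# Python's built-in idna codec (IDNA 2003 rules) converts to the punycode form.
ascii_form = unicode_domain.encode("idna").decode("ascii")
print(ascii_form)  # xn--bcher-kva.example

# Naive ASCII-only validation, still common in web forms and email software,
# rejects the Unicode form even though it is a perfectly valid domain.
naive_pattern = re.compile(r"^[a-z0-9.-]+$")
print(bool(naive_pattern.match(unicode_domain)))  # False: wrongly rejected
print(bool(naive_pattern.match(ascii_form)))      # True: only ASCII accepted
```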
Real-world examples, like hackathons with university students in Bahrain, showcased how digital cooperation can bridge technical skills and community needs. Meanwhile, Valts Ernstreits from Latvia shared how international engagement helped elevate the status of the Livonian language at home, proving that global advocacy can yield local policy wins.
The workshop addressed persistent challenges to inclusion: from bureaucratic hurdles that exclude indigenous communities to the lack of connections between technical and policy realms. Panellists agreed that real change hinges on collaboration, mentorship, and tools that meet people where they are, like WhatsApp groups and local capacity-building networks.
Participants also highlighted UNESCO’s roadmap for multilingualism and ICANN’s upcoming domain name support program as critical opportunities for further action. In a solution-oriented close, speakers urged continued efforts to make digital spaces more representative.
They underscored the need for long-term investment in community-driven infrastructure and policies that reflect the internet’s global diversity. The message was clear: equitable internet governance can only be achieved when all voices—across languages, regions, and technical backgrounds—are heard and empowered.
Based on experiences in South Africa, Namibia, Sierra Leone, and Uganda, a study highlights a troubling rise in cybercrime, much of which remains invisible due to widespread underreporting, institutional weaknesses, and outdated or absent legal frameworks. The report’s author, Tina Power, underscored the need to recognise cybercrime not merely as a technical challenge, but as a profound justice issue.
One of the central concerns raised was the gendered nature of many cybercrimes. Victims—especially women and LGBTQI+ individuals—face severe societal stigma and are often met with disbelief or indifference when reporting crimes such as revenge porn, cyberstalking, or online harassment.
Sandra Aceng from the Women of Uganda Network detailed how cultural taboos, digital illiteracy, and unsympathetic police responses prevent victims from seeking justice. Without adequate legal tools or trained officers, victims are left exposed, compounding trauma and enabling perpetrators.
Law enforcement officials, such as Zambia’s Michael Ilishebo, described various operational challenges, including limited forensic capabilities, the complexity of crimes facilitated by AI and encryption, and the lack of cross-border legal cooperation. Only a few African nations are party to key international instruments like the Budapest Convention, complicating efforts to address cybercrime that often spans multiple jurisdictions.
Ilishebo also highlighted how social media platforms frequently ignore law enforcement requests, citing global guidelines that don’t reflect African legal realities. To counter these systemic challenges, speakers advocated for a robust, victim-centred response built on strong laws, sustained training for justice-sector actors, and improved collaboration between governments, civil society, and tech companies.
Nigerian Senator Shuaib Afolabi Salisu called for a unified African stance to pressure big tech into respecting the continent’s legal systems. The session ended with a consensus: the road to justice in Africa’s digital age must be paved with coordinated action, inclusive legislation, and empowered victims.
At the Internet Governance Forum 2025 in Lillestrøm, Norway, a key session spotlighted the launch of the Freedom Online Coalition’s (FOC) updated Joint Statement on Artificial Intelligence and Human Rights. Backed by 21 countries and counting, the statement outlines a vision for human-centric AI governance rooted in international human rights law.
Representatives from governments, civil society, and the tech industry—most notably the Netherlands, Germany, Ghana, Estonia, and Microsoft—gathered to emphasise the urgent need for a collective, multistakeholder approach to tackle the real and present risks AI poses to rights such as privacy, freedom of expression, and democratic participation.
Ambassador Ernst Noorman of the Netherlands warned that human rights and security must be viewed as interconnected, stressing that unregulated AI use can destabilise societies rather than protect them. His remarks echoed the Netherlands’ own hard lessons from biased welfare algorithms.
Other panellists, including Germany’s Cyber Ambassador Maria Adebahr, underlined how AI is being weaponised for transnational repression and noted that Germany is doubling its funding for the FOC as a mark of its commitment. Ghana’s cybersecurity chief, Divine Salese Agbeti, added that AI misuse is not exclusive to governments: citizens, too, have exploited the technology for manipulation and deception.
From the private sector, Microsoft’s Dr Erika Moret showcased the company’s multi-layered approach to embedding human rights in AI, from ethical design and impact assessments to rejecting high-risk applications like facial recognition in authoritarian contexts. She stressed the company’s alignment with UN guiding principles and the need for transparency, fairness, and inclusivity.
The discussion also highlighted binding global frameworks like the EU AI Act and the Council of Europe’s Framework Convention, calling for their widespread adoption as vital tools in managing AI’s global impact. The session concluded with a shared call to action: governments must use regulatory tools and procurement power to enforce human rights standards in AI, while the private sector and civil society must push for accountability and inclusion.
The FOC’s statement remains open for new endorsements, standing as a foundational text in the ongoing effort to align the future of AI with the fundamental rights of all people.
At the Internet Governance Forum 2025 in Lillestrøm, Norway, the ‘Building an International AI Cooperation Ecosystem’ session spotlighted the urgent need for international collaboration to manage AI’s transformative impact. Hosted by China’s Cyberspace Administration, the session featured a global roster of experts who emphasised that AI is no longer a niche or elite technology, but a powerful and widely accessible force reshaping economies, societies, and governance frameworks.
China’s Cyberspace Administration Director-General Qi Xiaoxia opened the session by stressing her country’s leadership in AI innovation, citing that over 60% of global AI patents originate from China. She proposed a cooperative agenda focused on sustainable development, managing AI risks, and building international consensus through multilateral collaboration.
Echoing her call, speakers highlighted that AI’s rapid evolution requires national regulations and coordinated global governance, ideally under the auspices of the UN.
Speakers, such as Jovan Kurbalija, executive director of Diplo, and Wolfgang Kleinwächter, emeritus professor for Internet Policy and Regulation at the University of Aarhus, warned against the pitfalls of siloed regulation and technological protectionism. Instead, they advocated for open-source standards, inclusive policymaking, and leveraging existing internet governance models to shape AI rules.
Regional case studies from Shanghai and Mexico illustrated diverse governance approaches—ranging from rights-based regulation to industrial ecosystem building—while initiatives like China Mobile’s AI+ Global Solutions showcased the role of major industry actors. A recurring theme throughout the forum was that no single stakeholder can monopolise effective AI governance.
Instead, a multistakeholder approach involving governments, civil society, academia, and the private sector is essential. Participants agreed that the goal is not just to manage risks, but to ensure AI is developed and deployed in a way that is ethical, inclusive, and beneficial to all humanity.
Since 2015, 21 June has marked the International Day of Yoga, celebrating the ancient Indian practice that blends physical movement, breathing, and meditation. But as the world becomes increasingly digital, yoga itself is evolving.
No longer limited to ashrams or studios, yoga today exists on mobile apps, YouTube channels, and even in virtual reality. On the surface, this democratisation seems like a triumph. But what are the more profound implications of digitising a deeply spiritual and embodied tradition? And how do emerging technologies, particularly AI, reshape how we understand and experience yoga in a hyper-connected world?
Tech and wellness: The rise of AI-driven yoga tools
The wellness tech market has exploded, and yoga is a major beneficiary. Apps like Down Dog, YogaGo, and Glo offer personalised yoga sessions, while wearables such as the Apple Watch or Fitbit track heart rate and breathing.
Meanwhile, AI-powered platforms can generate tailored yoga routines based on user preferences, injury history, or biometric feedback. For example, AI motion-tracking tools can evaluate your poses in real time, offering corrections much like a human instructor.
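As a rough sketch of how such motion-tracking feedback can work, the Python example below computes the angle at a joint from pose landmarks and compares it with a target pose. The landmark coordinates, the 90-degree target for a bent front knee, and the tolerance are illustrative assumptions; commercial tools would obtain the landmarks from a pose-estimation model such as MediaPipe or OpenPose rather than hard-coded values.

```python
# Illustrative sketch: turning pose landmarks into corrective feedback.
# Landmark values, the 90-degree target, and the tolerance are assumptions.
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at point b, formed by the segments b->a and b->c."""
    a, b, c = map(np.asarray, (a, b, c))
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def knee_feedback(hip, knee, ankle, target=90.0, tolerance=10.0):
    """Compare the knee angle with a target pose and return a human-readable cue."""
    angle = joint_angle(hip, knee, ankle)
    if abs(angle - target) <= tolerance:
        return f"Knee at {angle:.0f} degrees: looks good."
    return f"Knee at {angle:.0f} degrees: aim for about {target:.0f} degrees."

# Hypothetical normalised landmark coordinates from a single video frame.
print(knee_feedback(hip=(0.45, 0.50), knee=(0.55, 0.70), ankle=(0.55, 0.90)))
```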
While these tools increase accessibility, they also raise questions about data privacy, consent, and the commodification of spiritual practices. What happens when biometric data from yoga sessions is monetised? Who owns your breath and posture data? These questions sit at the intersection of AI ethics and digital rights.
Beyond the mat: Virtual reality and immersive yoga
The emergence of virtual reality (VR) and augmented reality (AR) is pushing the boundaries of yoga practice. Platforms like TRIPP or Supernatural offer immersive wellness environments where users can perform guided meditation and yoga in surreal, digitally rendered landscapes.
These tools promise enhanced focus and escapism—but also risk detachment from embodied experience. Does VR yoga deepen the meditative state, or does it dilute the tradition by gamifying it? As these technologies grow in sophistication, we must question how presence, environment, and embodiment translate in virtual spaces.
Can AI be a guru? Empathy, authority, and the limits of automation
One provocative question is whether AI can serve as a spiritual guide. AI instructors—whether through chatbots or embodied in VR—may be able to correct your form or suggest breathing techniques. But can they foster the deep, transformative relationship that many associate with traditional yoga masters?
AI lacks emotional intuition, moral responsibility, and cultural embeddedness. While it can mimic the language and movements of yoga, it struggles to replicate the teacher-student connection that grounds authentic practice. As AI becomes more integrated into wellness platforms, we must ask: where do we draw the line between assistance and appropriation?
Community, loneliness, and digital yoga tribes
Yoga has always been more than individual practice—community is central. Yet, as yoga moves online, questions of connection and belonging arise. Can digital communities built on hashtags and video streams replicate the support and accountability of physical sanghas (spiritual communities)?
Paradoxically, while digital yoga connects millions, it may also contribute to isolation. A solitary practice in front of a screen lacks the energy, feedback, and spontaneity of group practice. For tech developers and wellness advocates, the challenge is to reimagine digital spaces that foster authentic community rather than algorithmic echo chambers.
Digital policy and the politics of platformised spirituality
Beyond the individual experience, there’s a broader question of how yoga operates within global digital ecosystems. Platforms like YouTube, Instagram, and TikTok have turned yoga into shareable content, often stripped of its philosophical and spiritual roots.
Meanwhile, Big Tech companies capitalise on wellness trends while contributing to stress-inducing algorithmic environments. There are also geopolitical and cultural considerations.
The export of yoga through Western tech platforms often sidesteps its South Asian origins, raising issues of cultural appropriation. From a policy perspective, regulators must grapple with how spiritual practices are commodified, surveilled, and reshaped by AI-driven infrastructures.
Toward inclusive and ethical design in wellness tech
As AI and digital tools become more deeply embedded in yoga practice, there is a pressing need for ethical design. Developers should consider how their platforms accommodate different bodies, abilities, cultures, and languages. For example, how can AI be trained to recognise non-normative movement patterns? Are apps accessible to users with disabilities?
Inclusive design is not only a matter of social justice—it also aligns with yogic principles of compassion, awareness, and non-harm. Embedding these values into AI development can help ensure that the future of yoga tech is as mindful as the practice it seeks to support.
Toward a mindful tech future
As we celebrate International Day of Yoga, we are called to reflect not only on the practice itself but also on its evolving digital context. Emerging technologies offer powerful tools for access and personalisation, but they also risk diluting the depth and ethics of yoga.
For policymakers, technologists, and practitioners alike, the challenge is to ensure that yoga in the digital age remains a practice of liberation rather than a product of algorithmic control. Yoga teaches awareness, balance, and presence. These are the very qualities we need to shape responsible digital policies in an AI-driven world.
NATO’s 2025 summit opened in The Hague amid rising tensions in Europe and the Middle East, overshadowed by conflict and cyber threats. Leaders gathered as Russia’s war in Ukraine dragged on, and Israel’s strikes on Iran further strained global stability.
European NATO members pledged greater defence spending, but divisions with the US over security commitments and strategy persisted. The summit also highlighted concerns about hybrid threats, with cyberespionage and sabotage by Russia-linked groups remaining a pressing issue.
According to European intelligence agencies, Russian cyber operations targeting critical infrastructure and government networks have intensified. NATO leaders face pressure to enhance collective cyber deterrence, with pro-Russian hacktivists expected to exploit summit declarations in future campaigns.
While Europe pushes to reduce reliance on the US security umbrella, uncertainty over Washington’s focus and support continues. Many fear the summit may end without concrete decisions as the alliance grapples with external threats and internal discord.
At the Internet Governance Forum 2025 in Lillestrøm, Norway, a vibrant multistakeholder session spotlighted the ethical dilemmas of AI in journalism and digital content. The event was hosted by R&W Media and introduced the Haarlem Declaration, a global initiative to promote responsible AI practices in digital media.
Central to the discussion was the unveiling of an ‘ethical AI checklist’, designed to help organisations uphold human rights, transparency, and environmental responsibility while navigating AI’s expanding role in content creation. Speakers emphasised a people-centred approach to AI, advocating for tools that support rather than replace human decision-making.
Ernst Noorman, the Dutch Ambassador for Cyber Affairs, called for AI policies rooted in international human rights law, highlighting the EU’s Digital Services Act and AI Act as potential models. Meanwhile, grassroots organisations from the Global South shared real-world challenges, including algorithmic bias, language exclusions, and environmental impacts.
Taysir Mathlouthi of Hamleh detailed efforts to build localised AI models in Arabic and Hebrew, while Nepal’s Yuva organisation, represented by Sanskriti Panday, explained how small NGOs balance ethical use of generative tools like ChatGPT with limited resources. The Global Forum for Media Development’s Laura Becana Ball introduced the Journalism Cloud Alliance, a collective aimed at making AI tools more accessible and affordable for newsrooms.
Despite enthusiasm, participants acknowledged hurdles such as checklist fatigue, lack of capacity, and the need for AI literacy training. Still, there was a shared sense of urgency and optimism, with the consensus that ethical frameworks must be embedded from the outset of AI development and not bolted on as an afterthought.
In closing, organisers invited civil society and media groups to endorse the Haarlem Declaration and co-create practical tools for ethical AI governance. While challenges remain, the forum set a clear agenda: ethical AI in media must be inclusive, accountable, and co-designed by those most affected by its implementation.
At the Internet Governance Forum 2025 in Lillestrøm, Norway, global stakeholders converged to shape the future of digital governance by aligning the Internet Governance Forum (IGF) with the World Summit on the Information Society (WSIS) Plus 20 review and the Global Digital Compact (GDC) follow-up. Moderated by Yoichi Iida, former Vice Minister at Japan’s Ministry of Internal Affairs and Communications, the session featured high-level representatives from governments, international organisations, the business sector, and youth networks, all calling for a stronger, more inclusive, better-resourced IGF.
William Lee, WSIS Plus 20 Policy Lead for the Australian Government, emphasised the need for sustainable funding, tighter integration between global and national IGF processes, and the creation of ‘communities of practice.’ Philipp Schulte from Germany’s Ministry of Education, Digital Transformation and Government Modernisation echoed these goals, adding proposals such as appointing an IGF director and establishing an informal multistakeholder sounding board.
The European Union’s unified stance also prioritised long-term mandate renewal and structural support for inclusive participation. Speaking online, Gitanjali Sah, Strategy and Policy Coordinator at the International Telecommunication Union (ITU), argued that WSIS frameworks already offer the tools to implement GDC goals, while stressing the urgency of addressing global connectivity gaps.
Maarit Palovirta, Deputy Director General at Connect Europe, represented the business sector, lauding the IGF as an accessible forum for private sector engagement and advocating for continuity and simplicity in governance processes. Representing over 40 youth IGFs globally, Murillo Salvador emphasised youth inclusion, digital literacy, online well-being, and co-ownership in policymaking as core pillars for future success.
Across all groups, there was strong agreement on the urgency of bridging digital divides, supporting grassroots voices, and building a resilient, inclusive, and forward-looking IGF. The shared sentiment was clear: to ensure digital governance reflects the needs of all, the IGF must evolve boldly, inclusively, and collaboratively.
At a packed session during Day 0 of the Internet Governance Forum 2025 in Lillestrøm, Norway, civil society leaders gathered to strategise how the upcoming WSIS+20 review can deliver on the promise of digital rights and justice. Organised by the Global Digital Justice Forum and the Global Digital Rights Coalition for WSIS, the brainstorming session brought together voices from across the globe to assess the ‘elements paper’ recently issued by review co-facilitators from Albania and Kenya.
Anna Oosterlinck of ARTICLE 19 opened the session by noting significant gaps in the current draft, especially in its treatment of human rights and multistakeholder governance.
Ellie McDonald of Global Partners Digital, speaking on behalf of the Global Digital Rights Coalition, presented the group’s three strategic pillars: anchoring digital policy in international human rights law, reinforcing multistakeholder governance based on São Paulo guidance, and strengthening WSIS institutions like the Internet Governance Forum. She warned that current policy language risks drifting away from established human rights commitments and fails to propose concrete steps for institutional resilience.
Nandini Chami of the Global Digital Justice Forum outlined their campaign’s broader structural agenda, including a call for an integrated human rights framework fit for the digital age, safeguarding the internet as a global commons, ensuring sustainable digital transitions, and creating a fair international digital economy that combats digital colonialism. She stressed the importance of expanding rights protections to include people affected by AI and data practices, even those not directly online.
Zach Lampell from the International Centre for Not-for-Profit Law closed the session with a stark reminder: those who control internet infrastructure hold immense power over how digital rights are exercised. He and others urged participants to provide feedback by 15 July through an open consultation process, emphasising the need for strong, unified civil society input. The organising coalitions committed to publishing a summary paper to advance advocacy ahead of the final WSIS+20 outcome document.
At the Internet Governance Forum 2025 in Lillestrøm, Norway, the IGF Support Association convened a critical session addressing the long-term sustainability of National and Regional Internet Initiatives (NRIs). With over 170 NRIs worldwide playing a key role in connecting local voices to global internet policy, participants discussed how a potential renewal of the IGF’s UN mandate might influence their operations.
While many, including internet pioneer Vint Cerf, welcomed the idea of institutional stability through UN backing, most agreed it wouldn’t automatically resolve the chronic funding and legitimacy challenges NRIs face on the ground. A recurring concern was the disconnect between expectations and resources.
After nearly two decades, most NRIs still operate on volunteer labour despite being expected to deliver professional-level outcomes. Sandra Hoferichter of EuroDIG warned that this grassroots model is reaching a breaking point, echoing others who called for more stable secretariats and professional staffing.
Joyce Chen and Fiona Asonga emphasised the importance of formalising multistakeholder teams to prevent initiatives from collapsing when key individuals depart. Funding strategies were a central theme, with diverse models discussed—from partnerships with ccTLD managers and technical communities to modest support from national governments.
Yet securing sustainable private sector investment remains difficult, partly because the IGF’s non-decisional format makes it a harder sell to business. Several speakers stressed the need to articulate clear value propositions, especially for big tech companies that benefit from an open and stable internet but often contribute little to maintaining its governance structures.
The session closed with a consensus that real sustainability demands more than money: NRIs need legitimacy, inclusivity, and a deeper integration with national policymaking. Proposals ranged from establishing parliamentary tracks to expanding sub-national IGFs, all with the aim of grounding internet governance in local realities while keeping it globally connected.
Why does it matter?
Despite unresolved questions, the mood remained constructive, with calls to continue the conversation and co-develop innovative models for the next chapter of grassroots digital governance.