Internet Governance Forum 2025
Session reports
Note: All listed times are in the CEST time zone.
Introduction
The Government of Norway hosted the 20th annual Internet Governance Forum (IGF) in Lillestrøm from 23 to 27 June 2025.
AI-enabled just-in-time reporting
We are pleased to share that Diplo is partnering with the IGF Secretariat and the Government of Norway (as host country) to deliver AI-enabled, just-in-time reporting from the IGF 2025 meeting.
Building on a decade of just-in-time IGF reporting, we will continue to provide timely and comprehensive coverage from the forum. Our reporting initiative will include session reports, an ‘Ask IGF 2025’ AI assistant, daily highlights, personalised reporting, and more.
Bookmark this page or download the Dig.Watch News+ app to learn more and stay up-to-date with our IGF session reports and newsletters.
Under the theme Building Governance Together, IGF 2025 marks the forum’s 20th anniversary and aims to shape the future of internet governance ahead of the WSIS+20 meeting later in the year. Norway is committed to fostering an open, secure, and inclusive internet by bringing together a wide range of voices within the multistakeholder model. The forum will feature diverse sessions and tracks designed to engage participants from governments, the private sector, civil society, academia, the technical community, and international organisations, from both developed and developing countries.
Event Statistics
Total session reports: 230
Unique speakers: 1,249
Total speeches: 169,031
Total time: 839,538.2 seconds (9 days, 17 hours, 12 minutes, 18 seconds)
Total length: 1,847,223 words, or 3.15 ‘War and Peace’ books
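The headline figures are straightforward unit conversions. A quick sanity check in Python, treating the total-time figure as seconds (which matches the quoted days/hours/minutes breakdown exactly); the ‘War and Peace’ word count of roughly 587,000 is an outside assumption, not a figure from the report:

```python
# Sanity-check the event statistics (figures from the page).
total_seconds = 839_538.2          # total recorded speaking time
days, rem = divmod(total_seconds, 86_400)
hours, rem = divmod(rem, 3_600)
minutes, seconds = divmod(rem, 60)
print(f"{int(days)} days, {int(hours)} hours, {int(minutes)} minutes, {seconds:.0f} seconds")

total_words = 1_847_223
war_and_peace = 587_287            # commonly cited word count; an assumption here
print(f"{total_words / war_and_peace:.2f} 'War and Peace' books")
```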
Total arguments: 2,601
Agreed points: 1,964
Points of difference: 637
Thought-provoking comments: 1,261
Prominent Sessions
Explore sessions that stand out as leaders in specific categories. Click on links to visit full session report pages.
Longest session: 3
Session with most speakers: 28
Session with most words: 26,268 words
Fastest speakers
1. Kenneth Harry Msiska: 292.90 words/minute
2. Anshul Sonak: 226.68 words/minute
3. Smriti Parsheera: 215.19 words/minute
Most Used Prefixes and Descriptors
digital: 6,874 mentions
ai: 6,056 mentions
internet: 4,409 mentions; most mentioned in Policy Network on Internet Fragmentation (PNIF) (121 mentions)
online: 2,371 mentions; most mentioned in Parliamentary Session 3: Click with Care: Protecting Vulnerable Groups Online (83 mentions)
development: 1,733 mentions; most mentioned in WS #231: Address Digital Funding Gaps in the Developing World (51 mentions)
Questions & Answers
How do we strengthen the resilience of critical internet infrastructures such as submarine cables?
Strengthening the Resilience of Critical Internet Infrastructure: Submarine Cables
The discussions across multiple sessions at IGF 2025 revealed a comprehensive approach to strengthening submarine cable resilience, encompassing technological solutions, governance frameworks, and international cooperation mechanisms.
Advanced Monitoring and Detection Technologies
A key technological advancement discussed was distributed acoustic sensing (DAS) technology. Steinar Bjørnstad from Tampnet demonstrated this capability across multiple sessions, explaining how “cables can be used as sensors to detect if anything impacts on the cable, or even if anything is approaching the cable.” He illustrated the practical application: “This shows how we can protect our cable. You see the red dots up there, and this line. This line is the fiber cable, and the red dots is a trawler approaching the cable.”
The technology’s relevance was underscored by recent incidents, as Bjørnstad noted: “We have recently seen fiber cuts in the Baltics caused by anchors being dragged over the cables. And by using this type of system, you can actually see any object approaching the cable, so that you can be able to stop it.”
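In rough terms, DAS samples acoustic backscatter at many points along the fiber and flags positions where the signal spikes, such as an anchor or trawler approaching. A minimal illustrative sketch of that alerting step (all values, positions, and the threshold are invented for illustration, not taken from Tampnet’s system):

```python
# Illustrative sketch of threshold-based alerting over DAS readings.
# Positions are meters along the cable; amplitudes are arbitrary units.

def detect_disturbances(amplitudes, threshold):
    """Return (position_index, amplitude) pairs exceeding the alert threshold."""
    return [(i, a) for i, a in enumerate(amplitudes) if a > threshold]

# Simulated backscatter amplitude sampled every 10 m; a trawler near 30-40 m.
readings = [0.2, 0.3, 0.2, 4.1, 3.8, 0.4, 0.2]
for i, a in detect_disturbances(readings, threshold=1.0):
    print(f"disturbance near {i * 10} m (amplitude {a})")
```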
Building Redundancy and Network Diversity
Industry leaders emphasized the critical importance of network redundancy. In the Securing Basic Internet Infrastructure session, Erica Moret from Microsoft explained their approach: “We’ve built extensive redundancies into the network, deploying multiple cable pathways and diverse landing points so that if one link is lost, traffic can be rerouted to other alternate routes.”
The discussion in WS #139 Internet Resilience highlighted the importance of technology diversity, with Manal Ismail suggesting governments should “promote conscious investment in resilient networks with built-in redundancy and also benefiting from technology’s diversity, like satellite versus land or undersea cables.”
Public-Private Partnerships and International Cooperation
The discussions emphasized the critical role of collaboration between governments and private sector operators. In the dedicated Protection of Subsea Communication Cables session, Kent Bressie highlighted that “more than anything else, we need more and better awareness and communication between and among submarine cable operators, other marine industries, and governments at the national, regional, and multilateral levels.”
Johannes Theiss from the EU outlined their comprehensive approach, mentioning their cable security action plan covering “prevention, over detection, response and recovery, up to deterrence” with substantial funding through programs like the Connecting Europe Facility.
Early Warning Systems and Predictive Analytics
Advanced monitoring capabilities were highlighted as essential for resilience. Erica Moret described Microsoft’s approach: “We continuously monitor all network links for performance degradation or outages using advanced telemetry tools. And this early warning system combined with predictive analytics allows near instant rerouting of internet traffic if a cable fails.”
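The rerouting logic Moret describes can be sketched as a health check over redundant paths: when telemetry stops reporting on a link, traffic shifts to the best surviving alternative. A simplified illustration (path names and latency figures are invented, not Microsoft’s actual telemetry):

```python
# Minimal sketch of latency-based failover between redundant cable paths.

FAIL = float("inf")  # a cut cable reports no usable measurement

def best_path(latencies):
    """Pick the lowest-latency path that is still reachable."""
    live = {path: ms for path, ms in latencies.items() if ms != FAIL}
    return min(live, key=live.get) if live else None

paths = {
    "primary-subsea": 45.0,     # measured round-trip latency in ms
    "alternate-subsea": 62.0,
    "terrestrial-backup": 80.0,
}
print(best_path(paths))          # primary-subsea
paths["primary-subsea"] = FAIL   # simulate a cable cut
print(best_path(paths))          # traffic reroutes: alternate-subsea
```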
Addressing Infrastructure Concentration Risks
The concentration of submarine cable ownership emerged as a significant concern. In Day 0 Event #252, Chris Disspain highlighted that “undersea cables are the physical backbone to the global internet and that includes news delivery. A lot of people who aren’t involved in this area think that the undersea cables are all owned by governments, and indeed some of them are, but a lot of them are owned by a small number of corporations”, creating potential vulnerabilities.
Legal and Regulatory Frameworks
The need for stronger international legal protections was discussed in Open Forum #10, where Lukasz Kulaga noted that “existing international legal regimes do not adequately protect subsea data cables from [intentional] damage, nor do they effectively hold perpetrators of such a damage accountable.”
Supporting Ecosystem Development
Beyond physical infrastructure, the discussions highlighted the need for comprehensive ecosystem development. In WS #231, Raj Singh emphasized: “There’s a lot of submarine cables being deployed all across the world. The problem is, it’s the cables that are being deployed. There’s no supporting ecosystem that’s being set up at the same time.”
Preparedness and Risk Management
The importance of proactive planning was emphasized throughout the discussions. Giacomo Persi Paoli summarized this principle: “resiliency cannot be improvised. It has to be made by design through careful planning.” Olaf Kolkman reinforced this view, stating that “resiliency is about readiness, is about preparedness, is about understanding what can go wrong.”
The discussions revealed that strengthening submarine cable resilience requires a multi-faceted approach combining advanced technologies, diverse network architectures, robust international cooperation frameworks, and comprehensive risk management strategies.
Are low-earth orbit satellite constellations a reliable way to ensure internet connectivity? Are we seeing a race in outer space regarding the deployment of such constellations? What are the benefits and challenges of such connectivity?
Low-Earth Orbit Satellite Constellations: Connectivity, Competition, and Challenges
Low-earth orbit (LEO) satellite constellations are emerging as a significant solution for global internet connectivity, though they present both opportunities and concerns regarding reliability, market concentration, and governance.
The Space Race and Market Concentration
There is indeed evidence of a competitive race in outer space for LEO satellite deployment. Franz von Weizsaecker warns that “We are about to waste a huge global public good, which is the lower earth orbit and the medium earth orbit by the scramble for space that is happening, driven by a couple of companies that compete in allocating their satellites in this low and medium earth orbit.” This creates a system “where it’s not like an open ecosystem where anybody can engage with, but it falls into the hand of a few very powerful either private companies or governments.”
The competitive landscape includes major players like Starlink, which has moved “from 8 years of losses to a small profit currently, based in large part on a little over 4 million subscribers,” alongside competitors like OneWeb, Amazon’s Kuiper, and SES.
Benefits for Connectivity
LEO satellites offer significant advantages for bridging connectivity gaps, particularly in remote and underserved areas. Christopher Locke from Internet Society explains that “having satellites allows us to get connectivity to communities that otherwise would not be covered by Fiber platforms or not be covered by mobile platforms.” The impact is particularly notable in island states, where “in some cases, we’re seeing communities and islands where Starlink is becoming the largest ISP in the island.”
Looking toward the future, Henry Wang from Singapore IGF predicts transformative potential: “the low Earth orbital satellites is not only starting. There are so many constellations under construction. So within three to five years, all of them will put into operation. So space ground integration network will be the future of our community network.”
Challenges and Concerns
Economic Sustainability
Regulatory and Legal Challenges
Jurisdictional issues pose significant challenges. Joanna Kulesza observes that “we are witnessing an increase in legal tensions focused around low-Earth-orbit satellites and critical Internet infrastructures” with complications arising from “the jurisdictional attribution here is based on territoriality, whereas we see states trying to reach more of a functional arrangement.”
Regulatory conflicts are evident in cases like Bolivia, where Leon Cristian reports: “Starlink is operating in my country without permission… my government asked Starlink to have a complaints office in order to operate in Bolivia, but since Starlink is so powerful and such a big company, they said, I don’t need that, I don’t need to put an office in your country.”
Dependency and Control Issues
The ease of satellite connectivity paradoxically creates new vulnerabilities. Steve Song warns that “when you connect a Starlink dish you just turn it on and it connects to the internet and that’s it. Which is remarkable, but it also means that you don’t know what an IP address is, you don’t know what an autonomous system number is, you don’t know how to build the network and it is a loss of agency and control as a result.”
Political control represents another significant concern. Marwa Fatafta highlighted how communities become “dependent on the whims of one tech billionaire who can, as in the case of Gaza… who said, I am not going to provide Starlink to the population in Gaza because the Israeli Minister of Communications tweeted, don’t you dare. And he also threatened to withdraw Starlink from Sudan, even though it wasn’t licensed then, and from Ukraine.”
Competitive Ecosystem Needs
Experts emphasize the importance of maintaining competitive options. Maarten Botterman cautions that “if you just do Starlink for low-orbit networks, it would be very difficult to have a competitive offer,” while acknowledging that competitive infrastructures including Starlink alongside traditional networks can improve accessibility and affordability.
LEO satellite constellations represent a promising but complex solution for global connectivity, offering unprecedented reach while raising concerns about market concentration, regulatory challenges, and technological dependency that require ongoing attention from policymakers and stakeholders.
How are IXPs helping bridge digital divides?
How IXPs Are Helping Bridge Digital Divides
Internet Exchange Points (IXPs) have emerged as crucial infrastructure for bridging digital divides, particularly in developing regions. Multiple sessions at the Internet Governance Forum highlighted their transformative impact on connectivity, costs, and local internet resilience.
Dramatic Growth and Impact in Africa
Several speakers emphasized the remarkable expansion of IXPs in Africa. Kurtis Lindqvist noted that “the first ten years we saw this double, and the last two years, two, three years, we’ve seen an explosion of this in Africa”, helping to “bring down costs, increase resilience, increase the robustness of the network”.
Emily Taylor from Main Session 3 highlighted the direct impact, stating that “the emergence of internet exchange points in Africa… has a really concrete impact in reducing latency, reducing costs, and improving speed of connectivity within Africa”.
Keeping Local Traffic Local
A key benefit of IXPs is their ability to localize internet traffic, dramatically improving efficiency and reducing costs. The Newcomers Orientation Session provided a compelling example from East Africa, where Chengetai Masango explained that before the IXP, “if I wanted to send a message to my next-door neighbor, it would go overseas to Europe, and then come back”, increasing costs. The establishment of an IXP meant traffic “stayed within the area, so the cost of that connection went down, Internet was cheaper”.
From WS #139, Manal Ismail emphasized that governments should “encourage keeping local traffic local through IXPs”.
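The cost of the detour Masango describes is easy to see as a path-latency comparison: without a local exchange, two neighboring networks reach each other via a subsea round trip to Europe; with an IXP they peer directly in-country. A toy illustration (all hop latencies are invented ballpark figures):

```python
# Illustrative latency comparison: local traffic with and without an IXP.

def round_trip(hops_ms):
    """Round-trip time given one-way per-hop latencies in milliseconds."""
    return 2 * sum(hops_ms)

# Without an IXP: traffic between two ISPs in the same city transits Europe.
via_europe = [2, 80, 5, 80, 2]   # local, subsea to Europe, exchange, back, local
# With a local IXP: the two ISPs peer directly over the exchange fabric.
via_ixp = [2, 1, 2]              # local, IXP fabric, local

print(f"via Europe: {round_trip(via_europe)} ms round trip")
print(f"via local IXP: {round_trip(via_ixp)} ms round trip")
```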
Reducing Dependence on International Links
IXPs provide critical resilience for countries with limited international connectivity options. In the Open Forum on Lesotho’s Digital Transformation, Kanono Ramasamule explained that as a landlocked country, they “have set up an internet exchange point within the country to ensure that at least local traffic can stay local even if we have problems”.
Danny Wazen from UNDP in Day 0 Event #161 highlighted their work to “promote… the IXPs and the role of this internet exchange points in order to provide less or lower cost, but also reduce the dependence on international links”.
IGF’s Role in IXP Development
The Internet Governance Forum has played a significant role in IXP proliferation. In the IGF Retrospective, Markus Kummer shared a specific example of someone from a Pacific island who said, “thanks to the IGF, we now have an IXP. I learned at an IGF meeting how to set up an IXP”. He also noted that “people knew it was important, but they didn’t know that it is not rocket science to set up an Internet exchange point”.
Regional Perspectives
In Latin America and the Caribbean, Gabriel Adonailo from LaQX explained in the IGF LAC Space that “los puntos de intercambio de tráfico básicamente lo que hacen es mejorar la calidad de servicio de internet a todos los usuarios e intentar mantener o minimizar el costo de acceso a internet” (traffic exchange points basically improve internet service quality for all users and try to maintain or minimize internet access costs).
The discussions across multiple sessions demonstrate that IXPs are not just technical infrastructure but critical tools for economic development, digital inclusion, and internet sovereignty in developing regions.
Can new generic top-level domains (gTLDs) and the concept of ‘universal acceptance’ make the internet more inclusive? How?
No relevant discussions found.
To what extent does the current approach to promoting certain digital public infrastructure initiatives risk creating new forms of digital colonialism?
Digital Colonialism Risks in Digital Public Infrastructure Initiatives
The discussions across multiple sessions at the Internet Governance Forum 2025 revealed significant concerns about how current approaches to promoting digital public infrastructure (DPI) initiatives risk creating new forms of digital colonialism, particularly affecting countries in the Global South.
Structural Power Imbalances
A central theme emerged around the unequal power dynamics between the Global North and Global South in digital governance. Sabhanaz Rashid Diya highlighted this tension, noting the existence of “norm shapers versus norm takers” where “you have the Global North, who are shaping the norms” while there are “so few Global South representatives, whether it’s governments, whether it’s civil society, whether it’s the private sector.”
This imbalance manifests in what she described as “this culture of, you know, imposition that has also come over the years, where, you know, some of the norm shapers are kind of also deciding what gets done in some of the Global South countries.”
Economic and Data Extractivism
Multiple speakers addressed concerns about extractive economic models embedded in digital systems. Anita Gurumurthy emphasized that these constraints “are manufactured by an extractivist economics” that is “historical and it’s also baked into the digital paradigm.”
In the context of language data and AI, Deshni Govender noted that “extractive practices often happen within countries and within the continent under the guise of the open collaboration concept.” This was further illustrated by concerns about “digital colonization” where “the data will be owned by the centralized company” and “the data will run out of your countries, and your people and the country don’t own the data.”
AI and Technological Dependencies
The discussion on AI revealed particularly concerning patterns of dependency. Ashana Kalemera warned that “The risk of AI-driven digital neocolonialism is growing. As global powers compete for technological influence, Africa must strengthen its position to ensure AI serves local needs rather than external interests.”
Lacina Kone highlighted the current dependency: “currently in Africa, on our continent, there are over a thousand startup who are downloading on a daily basis, the API from Open Frontier, Open AI Frontier model, as well as the DeepSeek, which is the Chinese.” He emphasized that “Those models are not located on our continent. They’re located outside the continent.”
Cultural and Epistemic Concerns
The discussions revealed deep concerns about cultural homogenization and epistemic injustices. Anita Gurumurthy noted that “what we ignore is that there is a western cultural homogenization and these AI platforms amplify epistemic injustices.” She further explained that “we are certainly doing more than excluding non-English speakers. We are changing the way in which we look at the world. We are erasing cultural histories and ways of thinking.”
Indigenous voices particularly emphasized these concerns. Aili Keskitalo stated: “AI is not neutral. It can replicate colonial logics if we are not involved from the beginning, as rights holders, not just users.”
Digital Plantations and Economic Parallels
Parliamentary discussions drew explicit parallels between digital colonialism and historical patterns of exploitation. John K.J. Kiarie from Kenya warned: “what will happen with this AI is that my people will be condemned to digital plantations, just like they were condemned with sugar cane and with coffee and with all these other things that happened in slave trade.”
DPI Implementation Concerns
Specific concerns were raised about how DPI initiatives are implemented. Vivian Affoah noted that “sometimes we find that a lot of these initiatives are as a result of, let’s say, pushed by the donor community, or the World Bank brings funding to the government, you need to get ID cards or ID system, national ID for your people. So they start implementation right away without consulting.”
Responses and Alternatives
Some sessions explored potential responses to these concerns. In High-Level Session 5 on charting the path forward for the WSIS+20 review and the role of the IGF, a speaker explained: “We are setting out what we call the EU offer, a set of tools that will allow, through very much the use of open source, those partner…”
Could the push for digital identity systems exacerbate existing forms of discrimination and exclusion?
The question of whether digital identity systems could exacerbate discrimination and exclusion was extensively discussed across multiple IGF 2025 sessions, with speakers providing concrete examples of how these systems can both include and exclude vulnerable populations.
Several sessions highlighted real-world examples of exclusion. In Networking Session #37, Susan Mwape provided specific cases: “I can think of a number of countries that are going through a lot of challenges where we’ve seen civil society actively engage in these processes. I think recently we had that case, a landmark case, where the Kenyan civil society sued the government on the Huduma Namba, which was excluding a lot of communities.” Vivian Affoah in the same session noted practical exclusion issues: “For example, in Ghana, Nigeria, there are issues about the national identity system. People register for ID cards, they don’t receive them. And these ID cards are tied to so many things, opening of bank accounts, getting SIM cards, and other services.”
The scale of digital exclusion was starkly illustrated in WS #98, where Keith Breckenridge described South Africa’s experience: “There was a court case last year in January where the Department of Home Affairs admitted that two million identity numbers had been turned off… that disables the identity number from functioning. It means you cannot access the birth of your child. Can’t bury your parents on access to bank accounts. You cannot function digitally.”
Healthcare discrimination was specifically highlighted in WS #257, where an online participant shared findings about India’s healthcare DPI: “people with chronic diseases, like leprosy, or people with disabilities, struggle with Aadhaar enrollment or authentication, leading to exclusion of those in need of healthcare services” and that “patients of diseases with social stigma, such as HIV, AIDS, exclude themselves from using ABDM.”
However, speakers also emphasized the importance of digital identity for inclusion. In WS #290, Dr. Jimson Olufuye argued that “identity is critical to closing the digital divide, because if you cannot identify anybody, it means the person does not really exist. And we’re talking about inclusivity.”
The broader implications for personhood were noted in the New Technologies and Human Rights session, where Anita Gurumurthy observed how “digital ID programs, deployment of facial recognition technologies in crime or social credit scoring for pensions. All of these examples show how our personhood is redefined by the manner in which tech renders who we are.”
Several speakers emphasized the critical role of civil society in addressing these challenges. In WS #257, Thomas Linder stressed that civil society organizations are needed to “represent different interests of groups and communities that would otherwise have been lost” in DPI implementation.
The discussions revealed a consensus that while digital identity systems have the potential to promote inclusion, their implementation carries significant risks of exacerbating existing discrimination and creating new forms of exclusion, particularly affecting marginalized communities, people with disabilities, and those with stigmatized conditions.
What are the risks of relying too heavily on public-private partnerships in developing digital infrastructure, particularly in terms of accountability and public interest?
Risks of Over-Relying on Public-Private Partnerships in Digital Infrastructure Development
The discussions across multiple IGF sessions revealed significant concerns about the risks of excessive dependence on public-private partnerships (PPPs) for digital infrastructure development, particularly regarding accountability deficits and threats to public interest.
Accountability and Transparency Challenges
Several sessions highlighted critical accountability gaps in current PPP models. In WS #225, Thobekile Matimbe emphasized systemic transparency failures: “from the 27 reported countries, we have less than four countries that really are transparent about those resources and what they’re doing with them.” Onica Makwakwa reinforced this concern, stating that “Accountability is just one of those things that I’m pained by. I feel like we’re just not seeing enough of that.”
The DPI pathways workshop illustrated how structural arrangements can undermine accountability. Luca Belli contrasted different models, noting that in India “if you want to file a freedom of access to information in India… they can reply to you, we are sorry, this is not a public organ, this is a foundation, so we are not bound by freedom of access to information” while Brazil’s model ensures “it is a public, it’s a truly public service… if I file a freedom of access to information request to the Brazilian Central Bank, they have to reply me.”
Market Concentration and Monopolistic Risks
The Data for Impact session provided detailed analysis of how PPPs can create monopolistic outcomes. Payal Malik explained that “the economics of multi-sided platforms where DPI’s are essentially the platforms connecting multiple actors… these inherent network effects of DPI’s can lead to winner-takes-all outcomes resulting in the creation of monopolies.” She warned that “application providers… are harvesting vast amounts of user data over time” and identified “a regulatory blind spot… because if this data collection, data usage by the private entities on these public platforms is not regulated, it may lead to creation of monopolistic enclosures and data hegemony in public-private partnerships.”
This concentration risk was illustrated in the DPI pathways discussion where Bidisha Chaudhury noted that “many of the vendors, small scale vendors that we interview, they don’t even think of UPI as a public infrastructure. They kind of synonymously think of it as a Google infrastructure” due to Google Pay’s market dominance.
Governance and Power Imbalances
The cloud autonomy session addressed fundamental governance challenges. Agustina Brizio emphasized the need to “rethink how public and private sector interact among each other” and stressed the importance of having “real enforcement mechanisms” rather than just gathering “multi-stakeholders in a table but then has only one or two stakeholders making the decision.” She identified “power imbalances that usually these companies have when they face government” in Latin America.
Threats to Media Independence and Public Interest
The media and big tech session highlighted infrastructure-related risks to press freedom. Chris Disspain warned about infrastructure owners becoming “instruments of the state or corporate power, wielding their control to advance political or economic interests, sometimes at the expense of press freedom and public interest.” The discussion emphasized that corporate decisions are “driven by profit, not by public interest.”
Investment Environment and Regulatory Challenges
The digital funding session explored the tension between private investment needs and public accountability. Franz von Weizsaecker acknowledged that “any private capital depends on the regulatory and the investment environment to be ready for that. And I cannot say that this is the case in all of the countries that we work with.”
Proposed Solutions and Safeguards
Despite acknowledging these risks, speakers proposed various safeguards. The Data for Impact session emphasized the need for “contractual arrangement or concession agreements between the private entity and the public infrastructure provider” to establish “fiduciary obligations on the private partners to uphold public interest and competitive neutrality.”
The discussions revealed that while public-private partnerships remain necessary for digital infrastructure development, current models often lack adequate safeguards to protect public interest and ensure accountability, requiring fundamental reforms to governance structures and regulatory frameworks.
Why do humans tend to be obsessed with building AI that matches human intelligence and has human attributes?
The question of why humans tend to be obsessed with building AI that matches human intelligence and has human attributes received limited discussion across the Internet Governance Forum sessions, with only two sessions touching on related themes.
In the Opening Ceremony, Monsignor Lucio Adrian Ruiz from the Holy See addressed the risks of anthropomorphizing AI rather than explaining the human obsession with creating human-like artificial intelligence. He stated: “We do not consider artificial intelligence to be a subject. It does not think, judge or feel. It is a product of human ingenuity and as such it must be accompanied by moral responsibility. Our intelligence is embodied, relational and moral. It is capable of compassion, truth and freedom. To confuse AI with human intelligence means reducing the human being to a set of calculations with the concrete risk of dehumanization.”
The Foundations of AI & Cloud Policy for Parliamentarians and Public Officials workshop addressed misconceptions about AI’s human-like qualities. Aleksi Paavola discussed myths perpetuated by popular media, explaining: “when we look at the movies related to AI, I think we often get this feeling that, OK, these AI systems, they are human-like. And in the movies, they think and feel like humans. But the reality is that AI systems, they are just statistical models. And they don’t have any feelings. And they don’t have any emotions.”
While these discussions highlighted the misconceptions about AI’s human-like capabilities and the risks of anthropomorphizing artificial intelligence, the sessions did not directly address the underlying psychological, cultural, or technological reasons why humans are drawn to creating AI with human attributes and intelligence.
In a world driven by economic growth and efficiency, can humans compete with machines? Should they? Is there space to advocate for a right to be humanly imperfect?
Human-Machine Competition and the Right to Human Imperfection
The discussions across various IGF sessions revealed a nuanced perspective on human-machine competition, generally advocating for human-AI collaboration rather than replacement, while acknowledging the legitimate concerns about efficiency-driven automation.
The Irreplaceable Nature of Human Judgment
Several sessions emphasized that certain human qualities cannot and should not be replaced by machines. In the autonomous weapons discussion, Gerald Folkvord from Amnesty International stated that “human rights are based on human dignity and the very idea that machines make autonomous life and death decisions about humans is a contradiction to human dignity.” Olga Cavalli reinforced this by emphasizing that “The important issue is that human judgment is irreplaceable. That has to always have to be in any training in decision involving lethal force.”
The judiciary session addressed similar concerns about human replacement in courts. While Judge Adel Maged asked, “[Can] AI replace the human element in courts? Because we are seeing now that AI is replacing many jobs,” Senator Dick Keplung provided a clear answer: “The topic is, can AI replace human element in courts? Answer is no, it cannot. The AI for me, improves the human lawyer in efficiency.”
Beyond Efficiency: Valuing Human Qualities
Several sessions challenged the purely efficiency-driven approach to AI adoption. In the AI equality discussion, David Reichel noted that while companies use AI primarily for efficiency, “efficiency as such is not enough reason to interfere with fundamental rights” and emphasized the need to use AI for making better and fairer decisions.
The parliamentarians’ workshop highlighted unique human strengths. Aleksi Paavola challenged the claim that “AI will replace human jobs,” arguing: “what I think the reality is that AI, what it’s really doing is it’s freeing up the capacity of humans to do more interesting stuff”, while emphasizing human strengths in emotional intelligence, creativity, ethical reasoning, and adaptability.
Advocating for Human-Centered AI
The AI therapist session provided perhaps the strongest advocacy for embracing human imperfection. Doris Magiri declared: “You are here to be fully human. And that is your greatest technology. As humans, we’re the first technology.” She emphasized that “AI should support, not replace human connection.”
Children’s perspectives provided additional insight in the children’s voices session, where Dr. Mhairi Aitken found that children overwhelmingly preferred traditional art materials, feeling that “art is actually real and children felt that they couldn’t say that about AI art because the computer did it, not them.”
Worker-Centric Approaches
The local AI policy session advocated for inclusive AI adoption, with Wai Sit Si Thou explaining that “if we really want to have an inclusive AI adoption that benefits everyone, we should focus on the right-hand side, on how AI can complement human [labour] and creating meaningful new jobs” rather than replacing human workers.
Concerns About De-skilling
The future of work session raised important concerns about human cognitive abilities. Ishita Barua warned about de-skilling, noting that “More and more people read summaries instead of full text. They write by prompting a language model with a few keywords and receive a polished paragraph in return. Where is the learning in that?” She advocated for “systems that sharpen our ability to think independently, reason critically, and build deep understanding.”
Conclusion
The discussions across IGF sessions suggest that while humans may not compete with machines on pure efficiency metrics, there is strong advocacy for preserving and valuing uniquely human qualities including judgment, creativity, emotional intelligence, and even imperfection. The consensus appears to favor human-AI collaboration that leverages the strengths of both, rather than replacement-oriented automation. As emphasized in the intelligent society session, the goal should be to “preserve human dignity in the face of rapid change” and ensure that “People must not and shall not become [slaves] of technology.”
Where are global AI governance initiatives heading towards?
Global AI Governance Initiatives: Directions and Trajectories
Global AI governance initiatives are heading toward multiple parallel frameworks with an emphasis on moving from principles to practice, though significant challenges around fragmentation and coordination remain.
Key Directional Trends
From Principles to Practice: A major theme emerging across discussions is the shift from establishing principles to implementing practical governance mechanisms. As noted in Open Forum #30, there is emphasis on “moving from principles to practice and that is undertaken through several initiatives”. The Main Session 2 panel consensus suggested “AI governance is moving toward more practical implementation of existing frameworks rather than creating new ones”.
Evolving Discourse: The governance narrative has evolved significantly. In the Book Launch session, Jovan Kurbalija traced this evolution: “Then we had AI safety aligned with the Bletchley conference. Again, [2023], AI magic. It’s going to kill humanity. We have to be careful. We need regulation.” However, he noted that subsequent conferences “calmed down, and Paris review in February, which basically removed completely Bletchley, this line of thinking”.
Major Global Frameworks and Initiatives
UN-Led Initiatives: The United Nations is establishing multiple mechanisms. In the Opening Ceremony, Antonio Guterres announced that “In New York, negotiations are underway to establish the Independent International Scientific Panel on Artificial Intelligence and the Global Dialogue on AI Governance within the United Nations”. The Open Forum #82 detailed these developments, with Amandeep Singh Gill outlining “setting up an international independent scientific panel” and “a regular global dialogue on AI governance within the United Nations”.
Regional Approaches: The EU AI Act continues to influence global approaches through the “Brussels effect.” In Open Forum #75, Ambassador Noorman explained: “What the EU-AI Act can have is what we call the Brussels effect”. However, implementation challenges are emerging, with Main Session 2 noting the EU AI Act is “running into some problems” and there’s “now kind of a consensus amongst key policymakers and voices in the EU that maybe we went too far”.
Council of Europe Convention: The Open Forum #16 highlighted the Framework Convention on AI as “the first, as mentioned, legally binding global treaty on AI”, which is “global because it’s open for signatures, not just to the members of the Council of Europe”.
Challenges and Fragmentation
Regulatory Fragmentation: A major concern is the emergence of incompatible regulatory approaches. In Lightning Talk #65, Arne Byberg noted: “We see in the US, for instance, they have still no federal AI regulation”, while businesses are “starting to build up double and triple AI initiatives simply to cope with the various regulations of the different regions”.
Need for Interoperability: The Open Forum #30 emphasized avoiding fragmentation, with speakers noting “the challenge of running this technology and developing and deploying this technology that is global and doesn’t have borders, as we’re all familiar with, is the risk of the fragmentation of approach”.
Inclusive and Development-Oriented Approaches
Global South Participation: There’s growing emphasis on including developing countries in governance frameworks. The Open Forum #67 highlighted Africa’s advocacy, with Adil Suleiman stating: “we are also advocating for a seat for Africa when it comes to AI policy-making and AI governance”.
Development-Centered Approach: The Open Forum #48 revealed Brazil’s approach in the G20, “highlighting the need for AI governance that is development-oriented, to reduce inequalities”.
Emerging Governance Models
Multi-stakeholder Approaches: Despite some resistance, multi-stakeholder governance remains central. The WS #288 discussion revealed tensions, with some arguing “it’s a mistake to have a single governance structure that would probably get captured”, while others emphasized that “We’re never going to have a global regime. We’re never going to have a single global governance structure”.
Experimental Approaches: The WS #294 highlighted AI regulatory sandboxes as an emerging governance tool, with the European AI Office “writing an Implementing Act for AI regulatory sandboxes, and we are supporting the rollout of the sandboxes across Europe”.
Future Outlook
The trajectory suggests governance will continue evolving through multiple parallel frameworks rather than converging on a single global regime.
What next for the International Scientific Panel on AI?
The International Scientific Panel on AI was discussed across several sessions at the Internet Governance Forum 2025, with key updates on its development and implementation under the Global Digital Compact.
In the Opening Ceremony, UN Secretary-General Antonio Guterres confirmed that “negotiations are underway to establish the Independent International Scientific Panel on Artificial Intelligence and the Global Dialogue on AI Governance within the United Nations.”
The most detailed updates came from the Implementation of the Global Digital Compact session, where Eugenio Garcia revealed that “they are discussing the third draft of the terms of reference and modalities for both the international independent scientific panel on AI and the global dialogue on AI governance” with “the proposal to have 40 members” and “a limitation of maximum two per nationality” to ensure geographic and gender balance.
During the Building Bridges for WSIS Plus session, Eugenio Garcia noted that “there is the third draft, which are the terms of reference and modalities for the AI scientific panel, the International Independent Scientific Panel on AI, but also the AI Global Dialogue” with an imminent deadline for member state feedback.
The panel’s purpose was elaborated in the Catalyzing Equitable AI Impact session, where Amandeep Singh Gill emphasized the need for “regular scientific assessments based on a global perspective, not the perspective of a region or a few companies, but a global scientific perspective” given AI’s rapid development and broad impact.
Regional initiatives were also highlighted, with Shikoh Gitau mentioning in the AI Readiness in Africa session the development of “an African scientific panel that is calling to young Africans, both in the continent and in diaspora, to enable and support their governments” in drafting AI frameworks.
The panel was also referenced as part of moving from AI principles to practice in the High Level Review of AI Governance session, where speakers emphasized its role in “continuing the global scientific conversation” and ensuring “the best scientific voices coming together.”
What next for the Global Dialogue on AI Governance?
The Global Dialogue on AI Governance is currently under development within the United Nations framework as part of the Global Digital Compact implementation. According to UN Secretary-General Antonio Guterres in the Opening Ceremony, “In New York, negotiations are underway to establish the Independent International Scientific Panel on Artificial Intelligence and the Global Dialogue on AI Governance within the United Nations.”
Ongoing negotiations are taking place to establish the modalities and terms of reference for the Global Dialogue. As discussed in Open Forum #48, Eugenio Garcia noted that “for the AI track which is very important for the GDC they are at the UN discussing the third draft of the terms of reference and modalities for both the international independent scientific panel on AI and the global dialogue on AI governance” and that these negotiations “are not easy”.
The purpose and structure of the Global Dialogue are becoming clearer through various discussions. Amandeep Singh Gill explained in Open Forum #82 that “Alongside that, we need a regular global dialogue on AI governance within the United Nations. So the summits are there. They are important moments for leaders to engage, but on a sustained basis, on an inclusive basis, we need that dialogue so we can learn from each other the experience of the EU with the AI Act, what’s working, what’s not working, China’s experience with inter-immersions on Gen-AI, other approaches need to see what works, what doesn’t work.”
Key principles are emerging for the Global Dialogue’s implementation. In Open Forum #30, Juha Heikkila emphasized that “And I think that in this regard also for the dialogue, the AI governance dialogue, we think it’s important that it doesn’t actually duplicate existing efforts, because there are quite a lot of them, and that’s why also in the GDC text it’s mentioned that it would be on the margins.”
Inclusivity and broader participation are central themes for the future development. As discussed in Main Session 2, Mlindi Mashologu highlighted South Africa’s G20 presidency work on “expanding, you know, the scope of the participation, you know, bringing more voices from the continent, from the global south, and also some of the underrepresented communities, in terms of the center of the AI governance dialogue.”
The AI for Good Global Summit has been recognized as a potential venue for the Global Dialogue. Thomas Lamanauskas mentioned in Open Forum #82 that “we’re very happy that in the AI Modalities Resolution discussions, the AI for Good Global Summit has [been] recognized as at least a potential venue, hopefully for the first global dialogue on AI governance that was coming out of the Global Digital Compact.”
The Global Dialogue represents the UN’s convening power in bringing together diverse stakeholders. Melinda Claybaugh noted in Open Forum #30 that “And then the global dialogue on AI governance through UN forums. I think that is the convening power there is what’s really important in bringing the right stakeholders.”
Do we need a new UN agency specifically dedicated to AI?
The question of whether a new UN agency specifically dedicated to AI is needed received limited discussion across the Internet Governance Forum 2025 sessions, with only a few substantive mentions.
The most direct reference came from Jovan Kurbalija in Main Session 2, who reflected on initial reactions to AI development: “Almost three years ago, when ChatGPT was released, it was magical technology… And at that time, you remember the reactions were, let’s regulate it, let’s ban it, let’s control it. There were knee-jerk reactions, let’s establish something analogous to the nuclear agency in Vienna for AI.” However, this was presented as a historical observation rather than a substantive discussion of the proposal’s merits.
A related but distinct approach was suggested in Open Forum #64, where Wai Sit Si Thou proposed: “And the last point that I want to highlight is on capacity building. We think that an AI-focused center and network model after the UN-CIMEC technology center and network could help in this regard to provide the necessary technology support and capacity building to developing countries.” This suggests a more distributed network model rather than a single centralized agency.
The closest endorsement of UN-level AI governance came from Phumzile van Damme in WS #438, who advocated for: “I think the ideal would be some form of international law around AI governance. And through that process, perhaps run through the UN, there is the ability to get a more inclusive process that includes the language, the ideals, the ethical systems of various countries.” She described this as her “utopia” for AI governance and called for “Binding international law on platform governance, AI platform governance.”
Overall, while the concept of UN-level AI governance mechanisms was touched upon, there was no comprehensive debate about the specific need for a new dedicated UN agency for AI during these sessions.
What is it about AI that we need to regulate?
The discussions across the Internet Governance Forum 2025 sessions revealed multiple dimensions of AI that stakeholders believe require regulation, spanning from immediate harms to long-term risks, and from technical safeguards to fundamental rights protection.
Content Generation and Information Integrity
A primary concern across many sessions was AI’s capacity to generate misleading or harmful content. Lubna Jaffery highlighted how “The advancement of generative AI has only intensified this challenge. Today, disinformation can be produced and spread at an unprecedented scale and sophistication.” The threat of deepfakes was particularly emphasized, with Lindsay Gorman noting that “over a third of elections last year had these major deepfake campaigns associated with them.”
Sexual deepfakes emerged as a specific area requiring urgent regulation. Kenneth Leung revealed that “there were nearly 35,000 AI models available for public download on one platform service for generative AI, many of which are even marketed or with the intention to generate NCIIs, non-consensual intimate imagery.”
Transparency and Explainability
The lack of transparency in AI systems was identified as a fundamental issue requiring regulation. Abel Pires da Silva from Timor-Leste stated: “At the moment, we are fearing AI because it has been treated as a black box. We don’t know how it reaches its conclusions.”
The need for explainable AI was particularly emphasized for high-stakes decisions. Mlindi Mashologu emphasized “sufficient explainability, which is the requirement for the AI decisions, you know, it’s one of the areas that we’re advocating, especially, you know, those that impact, you know, human lives and livelihoods.”
Bias and Discrimination
AI systems’ tendency to perpetuate and amplify existing biases was identified as requiring immediate regulatory attention. Björn Berge provided concrete examples: “Algorithms screening out women for jobs. Chatbots that misread names and accents. Decisions made with no explanation or recourse. These are not isolated issues. They are systemic deficiencies, and they demand a systemic approach and response.”
Autonomous Weapons and Life-Death Decisions
The use of AI in autonomous weapons systems was consistently identified as requiring prohibition. Gerald Folkvord explained from a human rights perspective that “human rights are based on human dignity and the very idea that machines make autonomous life and death decisions about humans is a contradiction to human dignity.”
High-Risk Applications and Human Oversight
Speakers emphasized that AI applications in critical sectors require enhanced regulation. The EU AI Act focuses on “high-risk AI systems” including biometric identification, critical infrastructure, education, employment, law enforcement, and judicial systems.
Children’s Safety and Protection
AI’s impact on children was identified as requiring special regulatory attention. Leanda Barrington-Leach warned that “The acceleration of AI is now set to supercharge these risks and harms” referring to children’s safety online.
Surveillance and Privacy
AI-powered surveillance systems, particularly facial recognition technology, were highlighted as requiring strict regulation. Speakers detailed violations of multiple human rights including “right to dignity, privacy, freedom of expression, peaceful assembly and association, equality and non-discrimination, rights of people with disabilities, the presumption of innocence and the right to effective remedy and the right to fair trial and due process.”
Data Ownership and Economic Rights
The issue of data ownership emerged as a key regulatory concern. Joseph Gordon-Levitt emphasized “the basic principle that your digital [self] should belong to you. That the data that humans produce, our writings and our voices and our connections, our experiences, our ideas should belong to us.”
Environmental Impact
AI’s environmental footprint was identified as requiring transparency and regulation. Ana Valdivia highlighted that “84% of widely used large language models provide, no disclosure at all of their energy use or emissions. Without better reporting, we cannot assess the actual trade-offs and design, we cannot design informed policies and we can also hold AI and related infrastructures accountable.”
Risk-Based vs. Technology-Neutral Approaches
Several speakers emphasized that regulation should focus on uses and risks rather than the technology itself. Juha Heikkila explained the EU approach: “the AI Act does not regulate the technology in itself, it regulates certain uses of AI. So we have a risk-based approach and it only intervenes where it’s necessary.”
Governance Frameworks and Accountability
The discussions emphasized the need for comprehensive governance frameworks rather than piecemeal regulation. Virginia Dignam explained: “AI is not just a technology, it’s a social technical system, it’s a system of systems and one discipline alone is not sufficient to address it.”
The discussions revealed consensus that AI regulation must be multifaceted, addressing immediate harms while preparing for future challenges, balancing innovation with protection, and ensuring that governance frameworks are inclusive, transparent, and based on human rights principles.
What unintended consequences might arise from the rush to come up with new regulations for AI, and how can we proactively address them?
Unintended Consequences of Rushed AI Regulations and Proactive Solutions
Discussions across multiple sessions at the Internet Governance Forum 2025 revealed significant concerns about the unintended consequences of rapidly implementing AI regulations, with speakers highlighting various risks and proposing proactive measures to address them.
Key Unintended Consequences Identified
Innovation Stifling and Economic Impact
A primary concern raised across multiple sessions was that excessive regulation could stifle innovation and economic growth. Jennifer Bacchus from the US State Department warned in High Level Session 3 that “As policymakers, one of our biggest concerns relates to efforts to restrict AI’s development, which from our point of view could mean paralyzing one of the most promising technologies that we have seen in generations”, emphasizing that “Excessive regulation on the AI sector could kill a transformative industry before it can really take root”.
The economic burden of compliance was highlighted by Andrew Vennekotter in WS #257, who noted that “the compliance costs for most government imposed regulations are pretty high. In the EU, they’re about 40% of the total product’s value”.
Regulatory Overreach and Chilling Effects
Several speakers warned about the chilling effects of overregulation. Marjorie Buchser in Parliamentary Session 1 cautioned that “we see is a tendency to try to regulate every piece of content, every algorithm that exists, which is impossible but also has a very chilling effect on freedom of expression”.
Kojo Boakye from Meta expressed concerns about control mechanisms in Open Forum #67, stating “I do have concerns when people start speaking about control, because control, for me, negates in some ways or may, in some jurisdictions and with some governments, negate what I have a bias and believe in, which is the ingenuity, innate ingenuity and brilliance of young Africans”.
Regulatory Fragmentation and Complexity
The creation of complex regulatory patchworks was identified as another significant concern. Moritz von Knebel in WS #438 noted that “Different national approaches, but also international approaches like the one that the EU has taken, create a lot of complexity for compliance and regulation”, which “creates a regulatory patchwork that is difficult to navigate, but also creates room for regulatory arbitrage”.
Premature and Unmaintainable Policies
The challenge of creating technology policies that remain relevant was highlighted by Eltjo Poort in WS #288, who warned that “If policy becomes too detailed, then it becomes very hard to maintain, especially when it comes to technology. This is a technology that is evolving very quickly”.
Evidence of Regulatory Reconsideration
Some jurisdictions are already recognizing the need to reconsider their regulatory approaches. Melinda Claybaugh in Main Session 2 observed that “there’s now kind of a consensus amongst key policymakers and voices in the EU that maybe we went too far and actually we don’t know whether this is really tied to the state of the science”.
Proactive Solutions and Approaches
Regulatory Sandboxes and Experimentation
Several speakers proposed regulatory sandboxes as a solution. In WS #294, Natalie Cohen discussed how sandboxes help “manage the tensions that can be created between regulation and innovation”, while Alex Moltzau emphasized their value for “regulatory learning”.
Flexible and Adaptive Frameworks
Jayantha Fernando from Sri Lanka emphasized the importance of “flexible governance tools” in WS #283, using “soft law primarily as an approach being what is perhaps best for Sri Lanka”.
Evidence-Based and Tailored Regulation
Alexandra Walden in New Technologies and the Impact on Human Rights advocated for “tailored regulation”, arguing that “broad vague regulation at the outset I think is both harmful to innovation in this area and really I think doesn’t help us have a narrowly tailored solution to the problem”.
Multistakeholder Approaches
Rajendra Pratap Gupta in the Dynamic Coalition Collaborative Session proposed “a multi-stakeholder ethical framework for regulating AI” as a solution to over-regulation, noting that “over regulation will kill innovation”.
Clear Guidance and Risk Assessments
Interestingly, some speakers noted that clear regulatory guidance can actually accelerate innovation. Eltjo Poort in WS #288 observed that “organizations that have clear guidance and they speed up innovation in AI terms because they don’t have to look over their shoulder all the time”.
Balancing Innovation and Safety
The discussions revealed a complex challenge of balancing innovation with safety. As Wolfgang Kleinwächter noted in Open Forum #33, “you have a dilemma, while on the one hand you want to stimulate innovation…”
Could the push for global AI governance standards inadvertently stifle innovation in developing countries?
The concern that global AI governance standards could inadvertently stifle innovation in developing countries was extensively discussed across multiple sessions at IGF 2025, with speakers highlighting both the risks of overly restrictive governance and the need for context-appropriate approaches.
Core Concerns About Innovation Stifling
Several sessions emphasized how restrictive governance could limit developing countries’ AI potential. In Open Forum #67, Kojo Boakye warned that control measures should be approached with “caution if we want to grab the full opportunity that this presents” regarding open source AI’s potential for Africa.
In Open Forum #30, Abhishek Singh highlighted how “Currently, the state of the technology is such that the real power of AI is concentrated in a few companies in a few countries” and discussed practical barriers like ensuring access to sufficient GPU resources for countries like India.
Need for Context-Aware Governance
Multiple sessions emphasized the importance of avoiding one-size-fits-all approaches. In Main Session 2, Mlindi Mashologu discussed the need for “context-aware regulatory innovation” and “adaptive policy tools that can be calibrated to center-specific risks and benefits.”
In WS #438, Alexandra Krastins Lopes noted that “Many global frameworks are still shaped primarily by [Global] North countries’ perspectives, with assumptions about infrastructure, economic and regulatory capacity, and also risk tolerance that do not necessarily reflect the realities of the global majority.”
Regulatory Innovation Approaches
Several countries shared their approaches to balancing innovation with governance. In WS #283, Jayantha Fernando from Sri Lanka emphasized that “In Sri Lanka, we don’t want legal regulatory steps to be an impediment towards innovation.”
Open Forum #68 featured Ambassador Ndemo sharing Kenya’s success with mobile money, where regulators decided, “we don’t have the regulations, but let’s move forward and see what happens” – an approach that led to one of the most inclusive innovations.
Structural and Economic Concerns
In the New Technologies and Human Rights session, Anita Gurumurthy criticized how “sustained lobbying by the big actors” creates imbalances and argued that constraints faced by developing countries “are manufactured by an extractivist economics.”
The Launch of the Global CyberPeace Index session highlighted structural inequalities, with Marlena Wisniak noting that “the digitalization often relies on the global north. Disproportionate data centers are based in Europe and in the U.S. and in China.”
Solutions and Flexible Approaches
Several sessions highlighted innovative solutions. WS #294 discussed how AI sandboxes can support responsible innovation, with speakers noting that African regulators “have really embraced the idea of experimentation when it comes to regulating these emerging technologies like AI.”
The OECD Trustworthy AI session emphasized context-appropriate approaches, with Anne Rachel advocating for “taking the time to do things, because rushing into doing things that are not geared to the context just keeps us behind more than anything.”
The discussions consistently emphasized the need for governance frameworks that enable rather than constrain innovation in developing countries, while ensuring appropriate safeguards and meaningful participation in global AI governance processes.
What are the implications of treating algorithms as ‘black boxes’ beyond human comprehension? How might this opacity erode public trust in AI?
No relevant discussions found.
How can we address the potential conflict between calls for data minimisation and the data-hungry nature of AI development?
The conflict between data minimisation principles and AI’s data-hungry nature was identified as a significant challenge across several IGF 2025 sessions, with speakers highlighting both the problems and potential solutions.
The Core Problem
Roxana Radu from WS #187 Bridging Internet AI Governance From Theory to Practice clearly articulated the fundamental tension: “there is the hard-learned lesson of personal data collection, use, and misuse. We have more than 40 years of experience with that in the internet governance space, and we’ve placed emphasis on data minimization, to not collect more than what you need. This lesson does not seem to apply to AI, in fact it’s the opposite. Collect data even if you are not sure about its purpose currently, machines might figure out a way to use that data in the future, is the opposite of what we’ve been practicing in recent years in internet governance.”
Specific Manifestations of the Problem
The conflict manifests particularly acutely in sensitive domains. Luciana Benotti in WS #283 AI Agents: Ensuring Responsible Deployment highlighted how “since there’s a lack of enough data for training systems, especially in domains that are sensitive, like healthcare, is… tempting, let’s say, to use information that is not properly anonymized for training purpose.” She also warned that “large language models, when they use private information for training, they can repeat that information at any point.”
For indigenous communities, the challenge is even more acute. In Open Forum #73 Indigenous Peoples Languages in a Digital Age, Valts Ernstreits explained: “AI technologies still rely heavily on large volumes of text data… in order to use a language in whatever technology, we need large amounts of data. And that data, and especially for small communities, it’s always hypersensitive. So, you don’t even need sensitive data to actually collide with GDPR.” Lars Ailo Bongo added that “the indigenous people and other minorities are considered data botanists, considered a special category. So this requires extra strong data protection.”
Proposed Solutions
Several speakers offered approaches to address this conflict. Hannah Taieb from WS #152 A Competition Rights Approach to Digital Markets challenged the assumption that extensive personal data is necessary, arguing that “doing personalization as it’s done on social media…It’s absolutely possible to do it while respecting GDPR and still having a very good user experience. The fact that…many big tech and big social media platform have integrated the fact that you need to use…any sensitive data meaning like…gender age whatever any demographic in order to have like a…a good user experience it’s actually not true it’s it relies on that for advertising perspective but not for user experience.”
For digital public infrastructure, Payal Malik in WS #257 Data for Impact Equitable Sustainable DPI Data Governance proposed regulatory solutions, suggesting “there should be some kind of a contractual arrangement or concession agreements between the private entity and the public infrastructure provider to provide for open access, but also put limits on the kind of data which could be collected, the minimization of data collection, et cetera.”
In Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action, Dr. Erika Moret emphasized that companies “must implement privacy by design, minimizing data collection, securing data storage and ensuring compliance with privacy laws.”
The discussions reveal that while the conflict between data minimisation and AI development is real and challenging, solutions exist through better governance frameworks, contractual arrangements, privacy-by-design approaches, and questioning assumptions about the necessity of extensive personal data collection for effective AI systems.
How can we address the potential conflict between calls for algorithmic transparency and the protection of trade secrets?
The conflict between algorithmic transparency and trade secret protection emerged as a significant challenge across multiple sessions at IGF 2025, with speakers proposing various approaches to balance these competing interests.
Industry Perspectives on Balancing Transparency and Commercial Interests
Technology companies acknowledged the tension while defending their current approaches. In the AI policy workshop, Google’s representative explained their compromise approach: “we are disclosing as much as we can while protecting our commercial interests, because of course model is a commercial interest.” This sentiment was echoed in discussions about open-source AI, where speakers recognized that “there is no binary open and closed” in AI development.
Legal and Regulatory Challenges
The tension became particularly evident in law enforcement contexts. In the facial recognition session, speakers described how “The government’s position is that this information is a trade secret belonging to the company that provides the software.” However, courts have begun pushing back, with one ruling that while commercial sensitivity was “understandable, it wasn’t good enough” to avoid transparency requirements.
Proposed Solutions and Alternative Approaches
Several innovative solutions emerged from the discussions. In the AI governance session, one speaker proposed requiring “all AI developers to implement robust safety guardrails and have these guardrails open source rather than the models themselves.” This approach would provide transparency on safety measures without requiring full algorithmic disclosure.
Developing Country Perspectives
Speakers from developing nations advocated for stronger transparency requirements. In the local AI policy session, it was argued that “transnational companies use the excuse of trade secrets to lock up data that otherwise should be available to public transportation authorities, public hospitals, etc.” The session suggested instituting “exceptions in IP laws for the sake of society” to ensure public access to essential data.
Industry Feasibility Concerns
Some industry representatives questioned the practical feasibility of full algorithmic transparency. In the AI security session, one speaker noted that companies typically view their algorithms as “their own IP and they will not be able to reveal it,” raising “a huge question mark about the feasibilities and possibility” of mandating algorithmic openness.
Support for Full Transparency
Despite industry concerns, some policymakers advocated for complete transparency. In the high-level session, Estonian representative Liisa Ly Pakosta dismissed business arguments against transparency, stating: “And all the arguments against it, like there are business purposes, etc., we wouldn’t agree. So we have a real experience how the full transparency helps for trust.”
The discussions revealed an ongoing tension between legitimate business interests in protecting intellectual property and the public need for algorithmic accountability, with various stakeholders proposing different approaches to balance these competing demands.
How do we reconcile the need for global AI governance with the vastly different cultural and ethical perspectives on AI across regions?
The question of reconciling global AI governance with diverse cultural and ethical perspectives emerged as a central theme across multiple sessions at IGF 2025, with participants acknowledging this as one of the most complex challenges in AI governance today.
Acknowledgment of Cultural and Regional Differences
Multiple speakers emphasized that different regions have fundamentally different approaches to AI ethics and governance. As Alexander Isavnin noted, “ethics and sustainability might be really different in different parts of the world”. Moritz von Knebel further highlighted that “different cultures, different countries see AI in a very different way”.
Avoiding One-Size-Fits-All Approaches
A consistent theme across sessions was the rejection of universal solutions. Mlindi Mashologu emphasized that “there is no one-size-fits-all when it comes to AI”. Thomas Linder reinforced this, stating that “It will never work to simply take one model, one cookie-cutter approach, and replicate it all across the world”.
The Concept of Interoperability
Rather than seeking uniform global standards, many speakers advocated for “interoperability” between different governance frameworks. Yoichi Iida from Japan explained that “different countries, different jurisdictions have different backgrounds, different social or economic conditions, so the approaches to AI governance have to be different from one from another, but still, that is why we need to pursue interoperability across different jurisdictions”. Anne Flanagan emphasized “policy interoperability”, noting that “what harms mean can look different in different regions, different cultural contexts”.
Regional Approaches and Local Solutions
Several sessions highlighted the importance of developing locally relevant AI solutions. Adil Suleiman emphasized the African perspective, stating "there is also a cultural and tradition aspect of it, because the culture in the West is not the same as the culture in Africa". Kenneth Pugh from Chile noted that "artificial intelligence, which is now trained with machine learning, I hope for good, with models from Anglo-Saxon countries and data from them, they're not resolving our problem".
Multi-stakeholder and Inclusive Approaches
Many speakers emphasized the need for inclusive governance that brings diverse voices to the table. Marjorie Buchser stressed the importance of “multi-stakeholder that is contextual” and noted that “it’s very important to have multi-stakeholder perspective, but also from different local and regional level to input diversity in every stage of the AI life cycle”.
Language and Cultural Representation
The issue of linguistic and cultural bias in AI systems was extensively discussed. Minister Salima Bah from Sierra Leone raised concerns about “cultural erasure with algorithm recommendations because we understand that these algorithms are trained on data sets that potentially don’t reflect our diversity”. Anita Gurumurthy warned that “there is a western cultural homogenization and these AI platforms amplify epistemic injustices”.
Finding Common Ground
Despite acknowledging vast differences, speakers identified some common foundational principles. Yoichi Iida suggested the need for “common foundation, probably as human centricity and democratic values and including transparency or accountability or data protection”. Ms. Zhang Xiao emphasized that “human-centric is something we all recognize”.
Role of International Organizations
Wolfgang Kleinwächter emphasized the UN's unique role, stating "If it's a global problem, we need all countries on the table, and only the United Nations offers this opportunity". However, speakers also recognized the value of regional approaches, with Lacina Kone noting that "each country has a sovereignty you have to take into account. But we have to make sure that each size should fit together".
Practical Solutions and Collaborative Models
Several concrete approaches were proposed. Ana Valdivia suggested that “rather than talk about digital sovereignty, that creates sort of like frictions between states, because all the states in the world want to become digital sovereign, we should talk about digital solidarity”. The OECD toolkit approach was presented as providing “practical resources for implementing, facilitating adoption across countries with a specific focus on emerging and developing economies but tailored to the diversity of needs”.
The discussions revealed a complex landscape where the need for global coordination must be balanced with respect for cultural sovereignty and local contexts. While no single solution emerged, the consensus pointed toward flexible, interoperable frameworks that allow for regional adaptation while maintaining core human-centric principles and fostering meaningful international cooperation.
What are the potential unintended consequences of the push for ‘ethical AI’ in perpetuating certain cultural or philosophical worldviews?
No relevant discussions found.
What concrete actions need to be taken to address the long-term societal implications of the increasing use of AI in judicial systems, immigration and border control, and government decision-making?
Concrete Actions to Address AI in Judicial Systems, Immigration and Government Decision-Making
The discussions across IGF 2025 sessions revealed several concrete actions being taken and recommended to address the long-term societal implications of AI use in critical government functions, though coverage varied significantly across the three domains.
Judicial Systems: Legislative Frameworks and Training
The most comprehensive discussion occurred in the Judiciary engagement session, where Justice Adel Maged emphasized the fundamental need for proper legislation: “what I’m saying is that all justice system to use AI, my belief, and this is what I am keeping demanding, we need legislation. We can’t leave it to judges or lawyers to present AI evidence without rules.”
Zimbabwe is taking proactive steps, as Justice Sylvia Chirawu noted: “Zimbabwe as a country is actually developing a policy framework on AI, so the judiciary could actually jump in and also develop its own policies that are specific to the judiciary, and also re-look at the laws that deal with evidence, with a view to amending them with respect to AI.”
UNESCO is implementing concrete training initiatives, with Tatevik Grigoryan highlighting their practical approach: “We have a global toolkit on AI and the rule of law which has already reached to 8,000 judicial practitioners” and “a massive online course which is available on our website.”
The Closing Ceremony reinforced the importance of judicial expertise, with Justice Adel Maged stating: "holding perpetrators accountable required well-crafted legislation implemented by judges equipped with sufficient technological expertise."
Immigration and Border Control: Limited but Critical Insights
Malcolm Langford from Trust provided insights during Host Country Open Stage, revealing ongoing experimental work: they are "testing out different models of AI" in immigration decisions and courts. He emphasized the need for "system thinking and governance" and warned that "we don't know really the risks of AI, so we need a lot of scaffolding around it. We need feedback systems to help us at early stages learn what is going wrong and what is going right."
The Open Forum #17 AI Regulation Insights From Parliaments session acknowledged these areas as high-risk, with Axel Voss noting: "migration, asylum and border control management is considered in a majority in our house for an AI high-risk system and also administration of justice and democratic processes, also this one is considered as an AI high-risk system."
Government Decision-Making: Transparency and Human Rights Safeguards
Several sessions addressed broader government AI use. In Main Session 2, Mlindi Mashologu emphasized the need for "sufficient explainability" in AI decisions affecting "credit scoring, you know, predictive policing, you know, healthcare diagnostics", where "we need to have a right to understand how these decisions have actually come."
The Netherlands shared their concrete response to AI failures in the Open Forum #75. Ernst Noorman described how “The use of strongly biased automated systems in welfare administration, designed to combat fraud, has led to one of our most painful domestic human rights failures.” Their response included “applying human rights impact assessments, by applying readiness assessment methodology for AI human rights with the UNESCO, and by launching a national algorithm registry, with now more than a thousand algorithms being registered.”
Research from Latin America presented in Global AI Governance: Reimagining IGF’s Role & Impact revealed widespread government AI use without adequate safeguards. Paloma Lara-Castro noted their investigation found AI usage “in sensitive areas of public administration, such as the areas of employment, social protection, public safety, education, and management of procedures, as well as usage in the administration of justice.”
Key Systemic Actions Identified
Malcolm Langford's emphasis on the need to "calibrate trust" and to "communicate back to citizens, users, judges and decision makers how accurate, how effective the AI predictions are, and what warnings need to be in place" represented a call for systematic transparency and accountability measures across all government AI applications.
How can synthetic data be leveraged to improve machine learning models while addressing concerns around data privacy, bias, and representativeness? What governance frameworks are needed to regulate the use of synthetic data?
Synthetic Data for Machine Learning: Privacy, Bias, and Governance
Based on the available meeting transcripts from the Internet Governance Forum 2025, the question of leveraging synthetic data to improve machine learning models while addressing concerns around data privacy, bias, and representativeness received minimal discussion across the sessions.
The only relevant mention was found in the Open Forum #82 on Catalyzing Equitable AI Impact, where Mariagrazia Squicciarini briefly touched upon synthetic content in the context of trust and data sets. She stated: “We know there was someone talking before about AGI, but for instance, let’s talk about synthetic content. Let’s talk about how to use it in a decent way for a good reason, for instance, to fix patchy data in order to have representative data sets.”
This brief comment suggests recognition of synthetic data’s potential to address representativeness issues in datasets, but no comprehensive discussion on governance frameworks, privacy concerns, or bias mitigation strategies was provided across any of the IGF 2025 sessions reviewed.
The limited attention given to this topic indicates that synthetic data governance remains an underexplored area in current internet governance discussions, despite its growing importance in AI development and deployment.
How can international law obligations be effectively translated into technical requirements for AI systems in military applications? And how can liability be determined when AI systems are involved in military actions that violate international law?
Translating International Law into Technical Requirements for Military AI Systems and Determining Liability
The question of how international law obligations can be effectively translated into technical requirements for AI systems in military applications, and how liability can be determined when AI systems violate international law, was discussed across several sessions at the Internet Governance Forum 2025, though comprehensive solutions remain elusive.
The Challenge of Accountability and Legal Agency
The most extensive discussion of liability issues occurred in the Open Forum on Regulation of Autonomous Weapon Systems, where Gerald Folkvord from Amnesty International highlighted the fundamental challenge: “Who do you hold responsible for a killer robot killing somebody in contradiction of international law?” He argued that “legal agency disappears” with autonomous systems and that “Once that disappears, once we say, let’s put that to the machines, machines are smarter than human beings, they will commit less mistakes than human beings, and not least, warfare by machines makes violations more invisible.”
Anja Kaspersen in the same session emphasized that “governance begins at the level of specification” and that accountability becomes difficult to trace without proper institutional decision-making frameworks.
Technical Implementation of International Humanitarian Law
The Workshop on Responsible AI in Security Governance provided the most concrete discussion on translating international law into technical requirements. Dr. Alexi Drew explained that “If we take IHL as part of that governance, a system should be designed, trained, tested, authenticated and verified with the data selected with its need to be compliant with IHL in mind. If it isn’t, that’s when you’re introducing the risks that something could be designed which is either completely incompatible with IHL or is open to being used in a manner which is non-compliant.”
Corporate Liability and International Criminal Law
The Day 0 Event on Tech Sector’s Role in Conflict addressed corporate liability specifically. Chantal Joris explained that international humanitarian law applies to individuals within companies when business activities have a nexus to armed conflict, stating "It starts applying in times of armed conflict, non-international armed conflict, international armed conflict… it does apply to individuals that operate within a company when those business activities have a nexus to an armed conflict." She noted that corporate executives could theoretically be liable under international criminal law through the ICC, though "under the very, very high thresholds that are under the Rome Statute."
Human Control and Decision-Making Complexity
The complexity of maintaining meaningful human control was addressed in the Open Forum on Cyberdefense and AI in Developing Economies. Wolfgang Kleinwächter highlighted the practical challenges: “We cannot delegate the question of life or death to a machine. So I think the human control is in the center, but how you organize the human control, you know, if you get as a soldier, you know, a recommendation from a computer, and says, you know, now you have to push the button, you know, certainly there is human control, and there’s a human in between, but what you can do in a difficult situation.”
Ongoing Challenges and Lack of Clear Definitions
Several sessions noted the ongoing challenges in defining and regulating autonomous weapons systems. In Main Session 2, Jovan Kurbalija mentioned that “On military and AI, it’s obviously getting unfortunately central stage with the conflicts, especially Ukraine and Gaza, together with the question of use of drones. There are discussions in the UN with the LAWS, lethal autonomous weapons or robot killers, and then the Secretary General has been very, very vocal for the last five years to ban killer robots, which basically is about AI.”
Wolfgang Kleinwächter also noted that after 10 years of negotiations on lethal autonomous weapon systems, “we have no clear definition what it is.”
The discussions revealed that while there is broad agreement on the need for accountability and compliance with international law, significant challenges remain in translating these principles into concrete technical requirements and establishing clear liability frameworks for AI systems in military applications.
What is the difference between internet governance and digital governance?
The distinction between internet governance and digital governance emerged as a significant topic of discussion across multiple sessions at IGF 2025, with experts offering various perspectives on their relationship and scope.
Core Definitional Debates
In the Workshop on Revamping Decision-Making in Digital Governance, Jordan Carter highlighted a fundamental issue: “I think the thing that people casually mean when they say digital governance is what Tunis says internet governance is. But we know what internet governance is because it’s written down in Tunis. Digital governance is not defined.”
Arguments for Internet Governance as the Primary Framework
Several experts argued that internet governance remains the more precise and encompassing term. In the book launch session, Jovan Kurbalija explained his decision to maintain the focus on internet governance: “My arguments for Internet prevailed because most of the governance issues related to digital are related to the Internet… ultimately, it comes to us through TCP/IP, through Internet protocol. Whether it is disinformation, whether it is e-commerce, whether it is content governance and everything else.”
This view was echoed in the WSIS Plus Multistakeholder Dialogue, where Juan Fernandez argued: “What’s digital without internet nowadays? What is artificial intelligence without internet or networks based on the internet protocol?”
Arguments for Digital Governance as a Broader Framework
Conversely, other experts advocated for digital governance as a more comprehensive approach. In the IGF LAC Space session, Renata Mielli explained the evolution: “the changes promoted by the Internet have allowed the emergence of new technologies with diverse impacts in our societies. For a long time now, governance spaces have stopped dealing exclusively with the Internet itself and have started to encompass everything that it entails.”
Technical vs. Broader Governance Distinctions
A key distinction emerged around technical versus broader governance issues. In Workshop 344 on WSIS+20 Technical Layer, Ajith Francis identified “a new sort of framing that’s emerging… between the governance of the internet, which is the actual standards, protocols, the namings and number system, but also governance on the internet, which is at the governance of the application layer.”
In Main Session 3, Hans Petter Holen articulated this distinction: “we need to strengthen internet governance, which shapes how we use it through shared norms and policies. And we need to guide digital governance, which shapes what it becomes in terms of social transformations.”
AI Governance and the Scope Question
The emergence of AI governance added complexity to this debate. In the AI Governance Open Forum, Juha Heikkila noted: “There is more to AI than what is on the Internet. Think of embedded AI, for example, robotics, intelligent robotics, autonomous vehicles, etc. So not all of AI is on the Internet.”
Evolution of the IGF’s Scope
In the IGF Retrospective session, Anriette Esterhuysen described how the forum evolved: “it managed to make the shift… from being a forum about the governance of the Internet, to being a forum about the governance of the use and evolution of the Internet.”
Academic Perspectives
In the GigaNet Academic Symposium, Jaqueline Pigatto raised fundamental questions about this distinction, asking whether discussions focus on “governance institutional approach to how we are dealing with these issues” or “substantive issues. So if you are talking about data protection, platform accountability, freedom of expression.”
The discussions revealed an ongoing tension between maintaining the precision of established internet governance frameworks while adapting to broader digital transformation challenges. While no consensus emerged, the debate highlighted the need for clearer definitions and frameworks as the digital landscape continues to evolve.
Are multilateral and multistakeholder approaches to internet and digital governance in opposition to each other? How to move away from this dichotomy and see the two as complementary, rather than competing?
Multilateral vs Multistakeholder Governance: Moving Beyond False Dichotomies
The question of whether multilateral and multistakeholder approaches to digital governance are inherently opposed was extensively discussed across multiple IGF 2025 sessions, with a clear consensus emerging that these approaches should be viewed as complementary rather than competing.
Rejecting False Dichotomies
Several speakers explicitly challenged the notion of opposition between these approaches. In Main Session 2, Jovan Kurbalija identified this as one of many “false dichotomies, including in the question of knowledge. I can list, you have multilateral versus multi-stakeholder, privacy versus security, freedom versus public interest… Ideally, we should have both, multi-stakeholder, multi-lateral security”.
This sentiment was reinforced in the WGIG+20 session, where Markus Kummer emphasized: “the Brazilians who always make the point there is no false dichotomy between the two. They have to work together”, while Jovan Kurbalija called it “the false dichotomy, multi-stakeholder versus multilateral”.
Complementary Rather Than Competing
Multiple sessions demonstrated how these approaches can work together effectively. Norway’s opening remarks in Welcome to Norway! illustrated this complementarity, with Minister Karianne Tung stating: “we believe it’s not enough with the multilateral processes, we need the multi-stakeholder model and platforms to meet as well”.
The Arab Region Digital Cooperation forum provided practical examples of this integration. Ahmad Farag noted: “what we are trying to do is to have this multilateral system supported by a multi-stakeholder approach. And I believe this will be a win-win situation for both governments and the community”.
Bridging Different Cultures of Decision-Making
The Parliamentary Session 4 offered insights into reconciling different approaches to governance. Raul Echeberria explained: “Those of us who come from the internet community are accustomed to what we could call maximizing the consensus opportunities… The traditional politics systems work in a different manner… based on the construction of majorities. That is a rule of democracy. So I think that we have to bring those two cultures close together”.
Practical Examples of Integration
Several sessions highlighted successful integration of both approaches. The WSIS Plus Multistakeholder Dialogue emphasized their complementary nature, with Min Jiang noting: “Many see GDC’s multilateral process and IGF’s multi-stakeholder process as complementary rather than antithetical to each other. So there’s no reason why the two cannot collaborate under the same UN umbrella”.
The GigaNet Academic Symposium provided a theoretical framework for this integration. Pari Esfandiari argued: “I think they serve different functions. One is experimental, the other one is connective. We need both innovation from IGF and a structure from WSIS to build a coherent and inclusive governance future”.
Challenges and Adaptations Needed
Some sessions identified specific challenges in reconciling these approaches. The WSIS+20 session highlighted institutional adaptation needs, with Yu Ping Chan observing: “multistakeholderism is not natural to the UN system itself… perhaps New York needs to adapt to that”.
The IGF Model as a Bridge
The IGF itself was frequently cited as a successful example of bridging these approaches. In the NetMundial+10 book launch, Jennifer Chung observed: “UN itself is multilateral but the IGF is multistakeholder and that is the beauty of it, how we are able to work together in this kind of environment”.
The consensus across these sessions was clear: rather than viewing multilateral and multistakeholder approaches as competing paradigms, the digital governance community should focus on leveraging their complementary strengths to address global challenges more effectively. The key lies in adapting institutional frameworks, bridging different decision-making cultures, and maintaining spaces like the IGF that successfully integrate both approaches under a unified framework.
What expectations and wishes do we have from the upcoming WSIS+20 review process? UN member states will negotiate an outcome document to be approved in December 2025 at the General Assembly; what should this document include to ensure that we make more consistent progress towards the WSIS vision of an inclusive information/digital society?
Expectations and Wishes for the WSIS+20 Review Process
The WSIS+20 review process emerged as a central theme across multiple IGF 2025 sessions, with stakeholders expressing diverse expectations and wishes for the outcome document to be approved by the UN General Assembly in December 2025.
Preserving Foundational Vision While Addressing New Challenges
Multiple speakers emphasized the need to maintain WSIS’s foundational vision while adapting to technological advancements. Ekitela Lokaale noted in the High Level Session 5: “we are hearing from the stakeholders, for example, the need for us to ground the WSIS Plus 20 outcome review in the original WSIS vision, that is the Geneva Declaration Tunis Agenda and commitment… there is a clear recognition that even as we ground the review in the original WSIS vision and the Geneva and Tunis agenda or commitments, we should also reflect some of the technological advancements that have happened over the last two decades.”
Cuba’s position in the ITU Open Forum was more conservative, stating that “the outcome documents of both phases of WSIS remain fully valid and should be reaffirmed.”
Strengthening Multistakeholder Governance
The preservation and strengthening of multistakeholder governance emerged as a key priority across sessions. Byron Holland emphasized in the multistakeholder dialogue that “the WSIS Plus 20 reviewed outcome document should explicitly support the multi-stakeholder model of Internet governance.”
Minister Nithati Moorosi stressed the need for institutionalizing this approach: “The multi-stakeholder model must be institutionalized, not treated as a token gesture in intergovernmental settings. This includes ensuring meaningful participation of smaller states and underrepresented regions, investing in capacity building, and enabling transparency and accountability across all stakeholder categories.”
IGF Mandate and Sustainable Funding
A critical concern across sessions was securing the IGF’s future mandate and sustainable funding. Jarno Suruela highlighted in the Demystifying WSIS+20 session the need to “renew and strengthen the IGF mandate, including by ensuring a more sustainable financial basis from the regular UN budget.”
Byron Holland added that “CIRA strongly supports a long-term mandate for the global IGF, along with enhanced institutional resourcing.”
Digital Divide and Meaningful Connectivity
Addressing persistent digital divides remained a central concern. Thobekile Matimbe expressed frustration in the connectivity gap session: “as we are discussing the World Summit on the Information Society and looking at how far we have come 20 years later, I think it is really concerning where we are right now. We are at a place where 20 years later we are still discussing the digital divide and articulating similar gaps that we articulated as far back as 2002, 2003.”
Li Junhua noted progress but emphasized ongoing challenges in the Global South session: “In 2015, during the WSIS Plus 10 review, an estimated 4 billion people were offline. Today, allow me to reiterate, that number has dropped to 2.6 billion. This is a major improvement, but of course, still far too many remain unconnected.”
Human Rights and Digital Rights
Strong emphasis was placed on grounding WSIS+20 in human rights frameworks. The Global Digital Rights Coalition outlined three core priorities, including “1) to promote a human rights based as well as a people-centric, sustainable and development oriented approach to the WSIS review… ensuring that policy responses are grounded in international human rights law.”
Esteve Sanz from the European Commission emphasized in the digital rights session: “on the WSIS plus 20 review this is very important… what EU member states have discussed, and this is how we will go to the negotiations of the outcome document, is to really take stock of the rise of digital authoritarianism… acknowledging that digital authoritarianism is on the rise, this has to be acknowledged, and then based on that propose what we hope will be unprecedented language at the UN level in the WSIS plus 20 resolution on digital human rights.”
Integration with Global Digital Compact
A major theme was avoiding duplication between WSIS+20 and the Global Digital Compact (GDC). Jorge Cancio emphasized in the multistakeholder cooperation session the need for “a joint implementation roadmap of the GDC integrated into WSIS.”
Maria Fernanda Garza stressed in the multistakeholder dialogue: “The WSIS Plus 20 process must not only think about inclusion, but operationalize it by integrating the implementation of the GDC as part of the WSIS Plus 20 outcomes.”
Flexibility of Action Lines
Several speakers argued that existing WSIS action lines were flexible enough to accommodate new technologies. Anita Gurumurthy stated in the building bridges session: “the existing action lines in WSIS Tunis are flexible enough to encompass new challenges. And rather than introducing new action lines or deleting existing ones, updates should be made to the current implementation architecture.”
Regional Perspectives and Inclusion
Regional voices emphasized the need for better representation. Ahmad Farag highlighted in the Arab region session: “To ensure that the Arab region priorities are meaningfully reflected in the outcome of the WSIS Plus 20 review process, I believe that we need to take some strategic steps… we need to unify the Arab voices through strategic coordination.”
Emerging Challenges
Technical community representatives expressed concerns about fragmentation. Joyce Chen warned in the WSIS+20 technical layer workshop: “2025, this year, is an inflection point.”
Given the upcoming WSIS+20 review process, where a renewal of the IGF mandate will be up for discussion, what does the IGF we want look like? What lessons have we learned from 19 years of the forum, and how can we build on them moving forward?
The IGF We Want: Vision for the Future Based on 19 Years of Experience
The WSIS+20 review process has sparked extensive discussions across IGF 2025 sessions about renewing and strengthening the Internet Governance Forum’s mandate. Multiple sessions revealed strong consensus around key priorities and lessons learned from nearly two decades of operation.
Call for Permanent Mandate
The most prominent theme across sessions was the call for establishing a permanent IGF mandate. As stated in the Closing Ceremony, “A permanent mandate would allow for deeper engagement, longer-term planning and more inclusive participation.” This sentiment was echoed throughout numerous sessions, with speakers emphasizing that permanent status would provide institutional stability and allow for more strategic long-term planning.
Minister Karianne Tung from Norway emphasized in the High Level Session 5: “For Norway, it is important that we are able to give IGF a permanent mandate. That is a key priority, so that it’s more predictable and more complementary on the existing structures that we already have.”
Sustainable Funding as Critical Priority
A crucial lesson learned is the need for stable, sustainable funding. The Action Oriented Solutions workshop emphasized that “ensuring the long-term financial sustainability of the global IGF and the wider IGF ecosystem is essential if we’re to fully realize both its purpose and its value.”
Germany’s position in the ITU WSIS+20 Open Forum highlighted “A top priority to secure a long-term and stable financial foundation for the IGF to ensure its full implementation of its mandate.”
Need for Institutional Strengthening
Multiple sessions identified the need to strengthen the IGF Secretariat and institutional capacity. The decision-making workshop called for “A stronger secretariat…with more resources” and better coordination capabilities.
The GDC Follow-up session specifically suggested to “strengthen the IGF secretariat in particular through appointment of a director position.”
Evolution from Talk Shop to Impact
A key lesson learned is the need to move beyond being perceived as merely a “talk shop.” The Dynamic Coalition session noted: “And that will be the main challenge that the IGF faces, where it has to move from a talk shop to an influencing position.”
The Academic Symposium reflected: “while the internet governance forum after 20 years remains a vital space for dialogue, how many times have we left the venue with rich ideas? That was many, many times. I can think of that, but it was very rare that we had a very clear path for the implementation of those ideas.”
Strengthening National and Regional Initiatives
The growth of the NRI ecosystem was recognized as a major success, with over 176 initiatives worldwide. The Global South forum emphasized: “The IGF has evolved into a global ecosystem with over 176 national, regional, sub-regional, and youth IGFs now active worldwide. These local and regional processes are not just complementary to our global efforts. They are foundational.”
Addressing Inclusivity and Participation Challenges
Several sessions identified ongoing challenges with inclusivity and meaningful participation. The IGF Retrospective session asked: “Do we take development and the realities of people that still live with digital exclusion seriously enough? Are we inclusive enough? Are we diverse enough?”
Streamlining and Focus
The need for better prioritization and focus emerged as a key lesson. The WSIS+20 Technical Layer workshop noted: “I would like to hear more proposals around how we can streamline IGF processes and intersessional work and how we can help to prioritize the work of the IGF and to give it more focus… But the issue that I want to point out is that we are very good at picking up things, but we don’t know how to put them down, you know, to make space for other pressing issues.”
Enhanced Integration with WSIS Processes
Better integration with broader WSIS mechanisms was identified as crucial. The Main Session 2 suggested: “I think the IGF, as WSIS plus 20 review happens, I think the IGF should be strengthened to really help with not only agenda setting of the action lines, but also as a feedback loop into CSTD, the WSIS forum, and other mechanisms where there is holistic sort of input from multi-stakeholders going into these processes.”
Preserving Multi-stakeholder Character
Throughout all discussions, there was unanimous emphasis on preserving the IGF’s unique multi-stakeholder character. The Opening Ceremony heard the EU confirm: “we as the European Union, we commit firmly to a rules-based global digital order, which is rooted in universal human rights, openness and the multi-stakeholder model of governance. So the EU remains a solid supporter of the Internet Governance Forum, and we strongly support its mandate beyond 2025.”
Looking Forward
The vision for the IGF’s future combines institutional strengthening with preservation of its core values. The Leadership Panel called for “organizational evolution”, while the Foresight session envisioned “a redesigned IGF, a redesigned and a braver IGF, redesigned in terms of making it much more participative and innovative, in terms of the methodologies we use for our sessions, and a braver IGF, more willing to actually ask difficult questions around which there’s not going to be consensus.”
The consensus emerging from IGF 2025 is clear: the community wants an IGF with a permanent mandate, sustainable funding, strengthened institutional capacity, better integration with global processes, enhanced inclusivity, and preserved multi-stakeholder character – building on 19 years of success while addressing identified limitations.
What are the risks and challenges of having two parallel processes for the implementation, review, and follow-up of GDC and WSIS outcomes?
The risks and challenges of having two parallel processes for the implementation, review, and follow-up of GDC (Global Digital Compact) and WSIS (World Summit on the Information Society) outcomes were extensively discussed across multiple sessions at IGF 2025, with stakeholders expressing significant concerns about duplication, fragmentation, and resource strain.
Key Risks and Challenges Identified
Resource Strain and Participation Barriers
Multiple speakers highlighted how parallel processes create significant barriers to participation, particularly for developing countries and civil society organizations. Renata Mielli warned that “As new processes and discussion forums emerge in different agencies, or the agenda is multiplied about artificial intelligence, cybersecurity, data protection, NINES and many other issues, it becomes more difficult to guarantee the effective participation of the countries of the global south, of small countries, of civil society” during the IGF Lac Space session.
Tatiana Tropina emphasized that “Not only not duplicate the process, let’s not create alternative vehicles or alternative process to what we already have… Because very few stakeholders, if any, have resources to follow all these multiple complex processes” in the multistakeholder cooperation workshop.
Fragmentation and Incoherence
The risk of fragmentation was a central concern across sessions. Baroness Maggie Jones emphasized the need “to integrate not duplicate, to align not fragment” during the Closing Ceremony. She further noted in the WSIS+20 High Level Session that “we need to make sure that that doesn’t result in fragmentation and duplication, and that’s the real challenge”.
Institutional and Process Confusion
Several speakers expressed confusion about how to coordinate the two processes. Ekitela Lokaale noted in the WSIS+20 High Level Session that “the views are diverse. On the one hand, for example, there are those who feel that WSIS should remain the overarching framework and that all the other proposals in the GDC be implemented under the WSIS architecture. That’s one. Then there are also those who say, let the two processes don’t duplicate but follow what is in WSIS and let the processes under GDC run their course”.
Jason Pielemeier expressed concern that “the WSIS process essentially becomes transformed into GDC implementation rather than the WSIS being seen as a way to incorporate the objectives of the GDC into an existing process” in the multistakeholder cooperation workshop.
Duplication of Efforts and Baroque Processes
Jorge Cancio emphasized the need to avoid “baroque duplications and having parallel processes” due to UN budget constraints in the multistakeholder cooperation workshop.
Frode Sorensen warned that “This implies that the global digital compact in practice largely is a duplication of this activity in the WSIS framework…There is a need to better connect WSIS and the GDC. Otherwise, we risk duplicated and fragmented efforts, which is unnecessary, since both initiatives have similar goals” in the technical layer workshop.
Government Perspectives on Process Challenges
Cuba’s position in the ITU WSIS+20 Open Forum highlighted developing country concerns: “we believe that no new mechanisms and processes should be created for the implementation and follow-up of the WSIS and the Global Digital Compact, because the increase in the amount of governance mechanisms and processes in the digital world make it difficult for many Member States, particularly developing countries, to participate”.
Proposed Solutions
Despite the challenges, several speakers proposed solutions. Maria Fernanda Garza suggested that “we can come together and think of a joint implementation roadmap to make use of the existing WSIS architecture” during the IGF Leadership Panel.
The consensus across sessions was clear: while both processes serve important purposes, careful coordination and integration are essential to avoid the significant risks of duplication, fragmentation, and exclusion of key stakeholders from meaningful participation in digital governance.
How can we ensure the GDC doesn’t become another well-intentioned but poorly implemented framework for digital cooperation?
No relevant discussions found.
Who needs to do what to ensure that the commitments and calls outlined in the Global Digital Compact have a meaningful and impactful reflection into local and regional realities? Are there lessons learnt from the implementation of WSIS action lines that could be put to good use?
Implementing the Global Digital Compact: Key Stakeholders and Lessons from WSIS
The discussions across multiple IGF 2025 sessions revealed a clear consensus on the need for coordinated, multi-stakeholder action to translate Global Digital Compact (GDC) commitments into meaningful local and regional impact, while leveraging two decades of WSIS implementation experience.
Key Stakeholders and Their Roles
National and Regional Internet Governance Initiatives (NRIs) emerged as critical implementation vehicles. Maria Fernanda Garza emphasized that “The NRIs can play a crucial role guiding implementation from the grassroots and enabling a bottom-up input… The NRIs are uniquely situated to engage with local governments for the purpose of implementing the GDC and the WSIS plus 20 outcomes.” Similarly, Concettina Cassa noted that “The NRIs could play a strategic role in the local implementation of the GDC. Their multi-stakeholder structure and their close connection to the local context make them particularly well-suited to monitor the implementation of GDC principles at local level”.
Governments face particular challenges in translating high-level commitments to practical action. Jimson Olufuye stressed that “we need to encourage countries to deepen this dialogue with their sustainable development goal offices, because we cannot be discussing at the top level and at the grassroots, nothing much is happening.” Minister Nthati Moorosi highlighted that “governments like Lesotho have a duty to model this inclusivity at home and advocate for it abroad”.
Civil Society and Non-Governmental Actors need enhanced capacity for meaningful participation. Thobekile Matimbe emphasized that “capacity building, raising awareness are some of the critical things that need to happen like as soon as yesterday for relevant entities like your national human rights institutions, your civil society actors.”
Integration with WSIS Architecture
A key consensus emerged around integrating GDC implementation within existing WSIS structures rather than creating parallel processes. Amandeep Singh Gill noted that “the Global Digital Compact adopted last year… member states were very conscious that they should not duplicate and they should actually build on the WSIS agenda… the reporting also on progress on the GDC is aligned with WSIS reporting.”
The need for a joint implementation roadmap was repeatedly emphasized. Jorge Cancio advocated for “a joint implementation roadmap of the GDC integrated into WSIS. And we are advocating also to update the existing WSIS architecture, which is different UN bodies and UN structures, to instill them with a multi-stakeholder approach”.
Lessons from WSIS Implementation
Several critical lessons emerged from 20 years of WSIS experience:
1. The Gap Between High-Level Commitments and Ground-Level Implementation: Thobekile Matimbe observed, “I think it is really concerning that where we are right now. We are at a place where 20 years later we are still discussing the digital divide and articulating similar gaps that we articulated as far back as 2002, 2003”.
2. Need for Better Coordination and Measurement: Fabrizia Benini proposed “actionable roadmaps that will track the implementation and the progress starting from the WSIS action lines, the SDG goals and the GDC commitments”.
3. Importance of Local Ownership and Context: Gitanjali Sah emphasized that “promoting local ownership and having like tailored local solutions capacity building program is really important for the success of any process like this.”
Practical Implementation Needs
Several concrete requirements emerged:
Capacity Building and Funding: Yu Ping Chan identified capacity building as “the number one ask from the national governments and communities that we serve”.
Focus on Implementation Over Negotiation: Anne Marie stressed that “we need to focus on how to deliver that actually and not just negotiate text.”
Accountability Mechanisms: Minister Nthati Moorosi called for “IGF and WSIS being platforms of accountability where countries come and account and say this is what I was supposed to do.”
The Role of the IGF
The Internet Governance Forum itself was identified as a crucial bridge between global commitments and local implementation. Tatiana Tropina noted “how the IGF and national and regional initiatives can be leveraged as a good vehicle for continuing the WSIS, for strengthening the WSIS implementation and the promise of the WSIS. But also being used as a vehicle for the GDC implementation within the WSIS process.”
The discussions revealed that successful GDC implementation requires coordinated action across all stakeholder groups, building on WSIS lessons learned, and establishing clear accountability mechanisms that connect global commitments to local realities through enhanced multi-stakeholder processes.
How can we address the tension between the drive for digital sovereignty and the need for a globally interoperable internet?
Addressing the Tension Between Digital Sovereignty and Global Internet Interoperability
The tension between digital sovereignty and global internet interoperability emerged as a central theme across multiple sessions at IGF 2025, with stakeholders offering various approaches to balance national autonomy with the need for a connected global internet.
Understanding the Core Tension
The fundamental challenge was articulated clearly in the Policy Network on Internet Fragmentation (PNIF) session, where Vinicius Fortuna from Google noted that digital sovereignty is “almost like incompatible with internet as a whole, because they don’t depend on other people anymore.” However, Marilia Maciel challenged this view, arguing that “the association between digital sovereignty and isolationism and fragmentation is not necessarily true” and that many requests for autonomy are legitimate responses to imbalances in the digital economy.
Distinguishing Sovereignty from Fragmentation
Several speakers emphasized the importance of distinguishing between legitimate sovereignty concerns and harmful fragmentation. In the PNIF session, Michel Lambert explained: “being sovereign over the Internet…They would like that our country depends less on other countries for its own infrastructure…So this is OK, and this is sovereignty. Now, fragmentation is totally political.”
Practical Solutions Through Technical Architecture
The Digital Identity workshop offered concrete technical solutions, with Jimson Olufuye suggesting federated databases where “countries can keep their data locally. And then through API, you can share the specific data categories that have been agreed upon based on policy framework.” This approach allows for data sovereignty while maintaining interoperability.
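Stripped of specifics, the federated pattern described above can be sketched in a few lines of Python. Everything here is illustrative, not from the session: the registry class, field names, and the agreed category list are hypothetical stand-ins for whatever a real policy framework would define.

```python
# Hypothetical sketch of a federated registry: each country keeps its records
# locally and exposes, via an API call, only the data categories agreed upon
# in a shared policy framework. All names and fields are illustrative.

# Categories the (hypothetical) policy framework allows to cross borders.
AGREED_CATEGORIES = {"name", "date_of_birth"}

class NationalRegistry:
    """A country's locally held identity records; the database never leaves."""

    def __init__(self, country, records):
        self.country = country
        self._records = records  # stays local; only filtered views are served

    def api_lookup(self, person_id, requested):
        """Return only the intersection of requested and agreed categories."""
        record = self._records.get(person_id)
        if record is None:
            return None
        allowed = requested & AGREED_CATEGORIES
        return {field: record[field] for field in allowed if field in record}

# One sovereign registry; a peer country queries it without seeing the full data.
norway = NationalRegistry(
    "NO",
    {"p1": {"name": "Kari", "date_of_birth": "1990-01-01", "tax_id": "secret"}},
)

# A cross-border query gets agreed fields only; "tax_id" is never shared.
view = norway.api_lookup("p1", {"name", "tax_id"})
print(view)  # {'name': 'Kari'}
```

The design choice matches the quote: sovereignty is preserved because the raw database stays with the country, while interoperability comes from the agreed, policy-driven filter applied at the API boundary.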
Regional Cooperation and Harmonization
The data governance forum emphasized harmonization over homogenization. Olga Kyryliuk explained: “So as a lawyer I don’t see the mutual recognition of data protection frameworks as a threat to national sovereignty but it’s rather an issue of legal interoperability. So we often don’t need to create the identical laws but what we really need is to create the trustworthy equivalence and to create the trust which is cross-border trust.”
Digital Public Goods as a Bridge
The High Level Session on Digital Public Goods presented open-source solutions as a way to achieve both sovereignty and interoperability. EVP Henna Virkkunen explained that DPGs aim to “offer secure, deployable digital solutions that can be adapted by users… enhancing trust, reducing costs, also avoiding vendor lock-in and enabling digital sovereignty while protecting privacy and security.”
Multistakeholder Cooperation as Essential
The workshop on multistakeholder cooperation emphasized that addressing this tension requires collaborative approaches. Tatiana Tropina noted: “This tension cannot be resolved just by saying, okay, let’s ditch the multi-stakeholder process, let’s go to more national, intergovernmental, multilateral regulatory processes. So, we must work together in this multi-stakeholder fashion to address these threats and to address the trends globally.”
Sovereignty as Collaboration, Not Isolation
Norway’s approach, presented in the Host Country Open Stage, demonstrated how sovereignty can coexist with international cooperation. Francis D’Silva explained that “sovereignty doesn’t mean isolation. On the contrary, we believe that sovereignty implies collaboration,” describing Norway’s approach as building “secure, values-driven digital infrastructure, sovereign but also transnational.”
Personal vs. National Digital Sovereignty
An alternative perspective emerged from the discussion on democracy, where Yosuke Nagai argued for focusing on personal rather than national digital sovereignty: “To me, I think the focus a lot of times on national digital sovereignty is really misplaced. If we really want to protect our citizens, we need to give them the personal digital sovereignty.”
Addressing Real Challenges
The discussions also acknowledged real challenges that drive sovereignty concerns. In the connectivity gap workshop, Leon Cristian provided a concrete example: “for example, Starlink is operating in my country without permission. How this happened? Because my government asked Starlink to have a complaints office in order to operate in Bolivia, but since Starlink is so powerful and such a big company, they said, I don’t need that, I don’t need to put an office in your country.”
Conclusion
The discussions revealed that the tension between digital sovereignty and global interoperability is not insurmountable. Solutions include technical approaches like federated systems, policy frameworks that enable harmonization rather than homogenization, open-source digital public goods, and multistakeholder cooperation that respects legitimate sovereignty concerns while maintaining the global nature of the internet. The key is distinguishing between legitimate sovereignty needs and potentially harmful fragmentation, while building trust through collaborative governance approaches.
What could be the potential long-term impacts of the differing approaches to tech regulation adopted by China, EU, and USA?
The long-term impacts of differing tech regulation approaches between China, EU, and USA were discussed across multiple IGF 2025 sessions, revealing significant concerns about regulatory fragmentation, competitive dynamics, and geopolitical implications.
Regulatory Divergence and Business Impacts
Several sessions highlighted how different regulatory approaches are creating operational challenges for businesses. In Lightning Talk #65, Arne Byberg observed that “we see businesses starting to build up double and triple AI initiatives simply to cope with the various regulations of the different regions. And as everyone understands, that is costly and a lot of time wasted, actually.”
The Three Regulatory Models
The distinct approaches of each region were characterized in Day 0 Event #174 (afternoon session), where Jacqueline Pigatto described the US with “technical sector and an infrastructure dominance, and this very market freedom approach,” the EU with “norm setting, with transnational reach, very human-centric, human rights approach,” and China with “directed state supervision, with some predictability, very planned, and very proximity with the central governments and the private companies.”
EU’s Regulatory Challenges
The EU’s approach, particularly the AI Act, faced criticism for potential overregulation. In Main Session 2, Melinda Claybaugh noted that “The EU, for example, was very fast to move to regulate AI with the landmark AI Act. And I think it’s running into some problems. I think there’s now kind of a consensus amongst key policymakers and voices in the EU that maybe we went too far.” This led to practical consequences, as mentioned in Day 0 Event #251, where Kojo Boakye observed that “huge players, and I suspect small ones, had seen so much uncertainty by that form of regulation that they held off launching some of the products that would be so valuable. So, for example, Meta delayed the launch of Meta AI on WhatsApp and Facebook” until regulatory clarity emerged.
US Protectionist Approach
The US approach was characterized as increasingly protectionist. In Day 0 Event #255, Marwa Fatafta noted that “the Trump administration is taking an extremely protectionist approach to their tech sector. They see it as a sector that needs to be protected against regulation and accountability and particularly from the EU or in fact any other state” that may try to regulate US companies.
Chinese Model and Internet Fragmentation
China’s approach was discussed in the context of digital sovereignty and internet fragmentation. In the Policy Network on Internet Fragmentation session, Marilia Maciel mentioned China as an example where “they became independent pretty much and they don’t depend on external service. So they are able to block like Google and Wikipedia and many other services because they have reproduced it internally.”
Geopolitical Competition and Market Dynamics
The competitive dynamics between these approaches were highlighted in Day 0 Event #252, where Anya Schiffrin warned that “If these companies exit, Chinese technology will take over. So, TikTok or whoever will just take on this job.” She also noted that “The US government has made it clear that it opposes both tech regulation and taxation all over the world.”
Regulatory Fragmentation and Trust Issues
The fragmentation created by these different approaches was seen as undermining cooperation. In WS #438, Moritz von Knebel warned that “China’s governance system, which focuses on algorithms, differs fundamentally from the EU, which is more risk based. The US, the UK have emphasized innovation friendly approaches, and that creates a regulatory patchwork that is difficult to navigate” and creates incentives for regulatory arbitrage.
Impact on Developing Countries
Several sessions discussed how these divergent approaches affect developing countries’ regulatory choices. In WS #214, Lacina Kone emphasized that Africa should pursue its own path, stating that “Based on the fact that if you look at North America, everything is based on the private sector base, which is at the heart of the capitalism. When we look at this, it’s based on the control of the government. And Africa wants to go with the user-centric approach” rather than adopting wholesale from other regions.
The discussions across these sessions suggest that the long-term impacts of these divergent regulatory approaches include increased business costs, regulatory arbitrage, geopolitical competition, and pressure on developing countries to choose between different models, potentially leading to further fragmentation of the global digital ecosystem.
How do we balance the need for global coordination on tech governance with the importance of context-specific, localised approaches?
Balancing Global Coordination with Localized Approaches in Tech Governance
The question of how to balance global coordination on tech governance with context-specific, localized approaches was extensively discussed across multiple sessions at IGF 2025, revealing a complex landscape of challenges and emerging solutions.
The Core Tension
Several speakers articulated the fundamental challenge. As Joanna Bryson noted: “all different countries have different priorities, capacities, risks, and it makes sense that we have at least some diversity in legislation.” Similarly, Jhalak Kakkar emphasized the unique nature of AI governance: “The way, you know, a toaster works in the United States, versus the way it works in Japan, versus the way it works in Vietnam or India, it is pretty much the same, but, you know, AI as a technology will be shaped in its, in the way it functions, in the way it impacts very differently in different contexts.”
Principles Over Uniformity
A recurring theme was the need for shared principles rather than uniform solutions. Paula Gori articulated this approach: “the regional specificities, they rightly so also have differences. And this is normal. I don’t think we will ever, ever get to something which is global in this sense, but this is fine. As long as the principles are shared and the principles are all agreed.”
In AI governance specifically, Yoichi Iida emphasized interoperability: “different countries, different jurisdictions have different backgrounds, different social or economic conditions, so the approaches to AI governance have to be different from one from another, but still, that is why we need to pursue interoperability across different jurisdictions, different frameworks.”
The Digital Public Goods Model
Several sessions highlighted Digital Public Goods (DPGs) as a promising approach to this balance. Thomas Davin explained: “countries can freely adopt and adapt digital public goods and use them to build components of safe, inclusive and interoperable digital public infrastructures according to their own priorities and context specific needs of course.”
The Importance of Local Understanding
Multiple speakers emphasized that successful tech governance requires deep local understanding. Maarten Botterman noted: “without local understanding of what’s needed and what can help, it’s very difficult to lend anything there successfully… If it’s about inclusion, you need to work with locals. You need to make sure that you reach out.”
In the context of digital identity, Debora Comparin emphasized the importance of listening: “We don’t have the arrogance of thinking that we’re just going to sit somewhere and define what’s going to happen, how all this digital identity infrastructure should be built, and just go off and share our vision with the world. It’s really about hearing people, hearing about what are the difficulties locally in the different countries.”
Regional Approaches and Harmonization
Regional coordination emerged as a key middle layer between global and local approaches. Folake Olagunju articulated this approach: “It’s about harmonisation at the regional level, but not homogenisation. So yes, we need to harmonise because we’re a regional bloc, we have similarities, but then it needs to be homogeneous in a certain extent so that it’s tailored to the different nuances of each member country.”
From an African perspective, Lacina Kone emphasized: “what really matters is not like one size should fit all, because each country has a sovereignty you have to take into account. But we have to make sure that each size should fit together.”
Challenges in Implementation
Several speakers highlighted the challenges of avoiding cookie-cutter approaches. Luca Belli warned: “I think that my first suggestion, coming back to my previous comments, would be not to copy and paste what has been done in Estonia or Norway, because it will likely not work in Ecuador or Zimbabwe, because you have to think about local realities first.”
Thomas Linder reinforced this point: “It will never work to simply take one model, one cookie-cutter approach, and replicate it all across the world, certainly not in a place as diverse as Africa. It just doesn’t work that way. You need civil society organizations with a deep embedded understanding of the local conditions who can help to do this integration, adaptation, and operationalization.”
The Role of National and Regional IGFs
National and Regional Internet Governance Forums (NRIs) were frequently mentioned as crucial mechanisms for bridging global and local approaches. Maria Fernanda Garza explained: “The NRIs are uniquely situated to engage with local governments… these insights are crucial for developing informed policies that both safeguard users and preserve the benefits of an open internet.”
Multi-stakeholder Approaches
The multi-stakeholder model was consistently presented as essential for balancing global coordination with local needs. Charlotte Scaddan emphasized: “we cannot take a cookie-cutter approach, right? I mean, you know, we absolutely need to look at national context when rolling out.”
Emerging Solutions
Several innovative approaches were discussed. In the context of AI sandboxes, cross-border collaboration was highlighted as a way to maintain local relevance while enabling global learning. In WS #294 on AI sandboxes and responsible innovation in developing countries, one speaker noted: “AI affects us all, you know, across regions so in a sense, you know, what can we do to really unite across borders… joint AI regulatory sandboxes as a policy mechanisms…”
How can we create more effective mechanisms for civil society participation in tech policy-making that go beyond token consultations?
Creating Effective Mechanisms for Civil Society Participation in Tech Policy-Making Beyond Token Consultations
The discussions across various IGF 2025 sessions revealed significant concerns about tokenistic civil society participation and proposed numerous mechanisms for meaningful engagement in tech policy-making processes.
The Problem of Tokenism
Multiple speakers highlighted the pervasive issue of tokenistic participation. Jenna Fung from Asia Pacific Youth IGF emphasized that “The truth is youth participation in Internet governance remain at best tokenistic and more often structurally excluded” and criticized approaches where “Hosting a youth summit filled with top-down speeches from leadership is not cross-generational dialogue”.
In disability inclusion contexts, participants emphasized that “representation must not be symbolic, but it must be standard and intentional” and that “true inclusion must mean more than just inviting people into the room, but also preparing the room for them”.
Moving Beyond Superficial Consultations
Several speakers addressed the inadequacy of current consultation processes. Abed Kataya noted during cyber laws discussions that “In other countries, actually they do hear, they do public consultation with the civil society, but then they do whatever they want”.
Jhalak Kakkar emphasized in Main Session 2 the need for “participation that actually is meaningful… participation that actually impacts outcomes and outputs”.
Structural and Systemic Solutions
Participants proposed several structural reforms to enable meaningful participation:
Early and Continuous Engagement
Thomas Davin highlighted during child safety discussions that “once you engage children it has to be meaningful, if you want it to be meaningful it means action needs to be taken and needs to be visibly taken”.
Susan Mwape emphasized in DPI stakeholder mapping the importance of “inclusion by design” rather than bringing civil society “to the table much too late in the process”.
Power Redistribution and Co-Creation
Juan Carlos Lara argued in WSIS+20 discussions that “inclusive governance requires some degree of joint capacity to make decisions, to redistribute power” and called for “co-creation” rather than “top-down diagnostics”.
Ms. Ching shared Malaysia’s successful approach in the Norway session, noting that “more than 90% or at least 85% of the content actually comes from the civil society itself” in their Malaysian Media Council bill.
Addressing Barriers to Participation
Financial and Capacity Constraints
Michel Lambert highlighted critical funding challenges in PNIF discussions, noting that “up to 80% of this funding is being cut for this year and the year beyond”.
Iria Puyosa emphasized in LAC Space discussions that “the survival of the Internet multi-stakeholder model of governance is tied to the survival or sustainability of civil society organizations”.
Technical and Institutional Barriers
In technical standards discussions, speakers identified specific barriers including membership fees and technical language. Natalie Turkova suggested “dedicated seats or specific roles that we can set up for them” and removing financial barriers.
Practical Mechanisms and Models
Multi-Stakeholder Advisory Bodies
Several successful models were highlighted. Brazil’s approach was praised, with Beatriz Costa Barbosa noting in NRI discussions that “the Brazilian data protection authority has made efforts to include public participation in its rule-making and regulatory process”.
Community Data Governance
Melissa Omino proposed innovative approaches in African NLP discussions, advocating for “community data sovereignty” where communities are “legally recognize[d] as collective data stewards with inherent rights to govern data”.
Transparent Feedback Mechanisms
Tatiana Tropina emphasized in cooperation discussions that “it is important for stakeholders to see how their input is actually taken into account”.
Sector-Specific Approaches
AI Governance
Virginia Dignum highlighted in AI policy research the need to “integrate not only the academic research, but again, different, different types of knowledge from indigenous knowledge, from contextual knowledge”.
Platform Governance
Janjira Sombatpoonsiri advocated in platform governance discussions for “a pluralist approach” that “moves beyond the legal sphere to encourage participatory fact-checking, structured pre-bunking efforts, and community moderation initiatives”.
What are the implications of developed countries exporting their digital governance models to the Global South through development aid and capacity building programmes?
The discussions across multiple IGF 2025 sessions revealed significant concerns about the implications of developed countries exporting their digital governance models to the Global South through development aid and capacity building programmes. The issue was characterized by several key themes:
Critique of Current Approaches
Several speakers highlighted the problematic nature of current development aid approaches. In WS #231, Diya addressed the “unhealthy tension between digital development and human rights” and identified the problem of “norm shapers versus norm takers” where “some of the norm shapers are kind of also deciding what gets done in some of the Global South countries.” Franz acknowledged this issue, noting there’s “an unhealthy tendency sometimes that some of the aid projects they come basically saying like okay here’s your solution we bring it and this is what we want to bring and so it’s a very much supply oriented approach” rather than looking at how governments can procure services within their own legal systems.
Digital Colonialism Concerns
The concept of digital colonialism emerged as a central concern. In the Welcome to Norway session, Ms. Ching warned about “a slide towards a new form of digital colonization where a handful of powerful states and corporations dictate the rules, standards, and norms for the rest of the world.” She emphasized the need for Global South countries to move “from being mere consumers of technology to becoming co-creator of our shared digital future.”
Copy-Paste Problem and Context Sensitivity
Multiple sessions highlighted the dangers of simply copying governance models without adaptation. In Day 0 Event #257, Luca Belli cautioned that “copying and pasting from Europe is not necessarily the best option” and emphasized studying “developing world approaches to data governance, and maybe not only focusing on the most developed countries.”
In Parliamentary Session 1, an audience member from Africa questioned whether they should be “depending on the central top-down presentation as we saw in data protection. We saw the models that were taken was an American model developing on one side of the globe, the GDPR developing in Europe, and the rest of the world being told to follow in forming their laws in that way.”
Foreign-Driven Agendas and Funding
Several sessions raised concerns about foreign-driven agendas in development aid. In Open Forum #67, moderator Alison Gilwald noted that “we have a scattering of different AI blueprints, things that have come at different times doing different things, representing different interests. A lot of them other than the continental strategy, foreign funded, foreign driven, different agendas on those.”
In WS #214, Shikoh Gitau asked: “But who is drafting these policies? What agenda do they have? Do they have Africa at heart when they are doing this? Those are the questions you should be asking.”
Specific Examples of Model Export
The discussions included concrete examples of governance model export. In Open Forum #56, it was noted that “many African nations do have data protection laws and largely modeled after the GDPR.” Similarly, in Open Forum #17, several countries mentioned using the EU AI Act as a model.
Inadequate Consultation and Cultural Misalignment
The issue of inadequate consultation was highlighted in Networking Session #37, where Vivian Affoah mentioned that “Sometimes we find that a lot of these initiatives are as a result of, let’s say, pushed by the donor community, or the World Bank brings funding to the government, you need to get ID cards or ID system, national ID for your people. So they start implementation right away without consulting.”
In Parliamentary Session 3, Neema Iyer noted that “legislative frameworks are often too narrow. They, you know, they focus on takedowns or criminalization, or they borrow from Western contexts, but they don’t really meet the lived realities of women.”
Alternative Approaches and Solutions
Despite the concerns, some sessions highlighted more positive approaches. In High Level Session 5, Fabrizia Benini discussed the EU’s approach: “We are setting out what we call the EU offer, a set of tools that will allow, through very much the use of open source, those partner countries to take them and adapt them to their own internal uses” with the objective to “make sure that we all become, in fact, actors, not only consumers.”
The discussions ultimately highlighted the need for more equitable, consultative, and contextually sensitive approaches to international digital cooperation, moving away from one-size-fits-all solutions toward genuine partnership and co-creation models.
How do we ensure that the interests, priorities, and realities of developing and least-developed countries are better represented and considered in global digital governance processes?
Ensuring Better Representation of Developing and Least-Developed Countries in Global Digital Governance
The question of how to better represent the interests, priorities, and realities of developing and least-developed countries (DLDCs) in global digital governance processes was extensively discussed across numerous sessions at IGF 2025, revealing both systemic challenges and emerging solutions.
Current Representation Gaps and Challenges
Multiple speakers highlighted significant representation gaps in current global digital governance processes. Diya pointed out: “you go to the IGF, look around you, there’s just so few Global South representatives, whether it’s governments, whether it’s civil society, whether it’s the private sector.” This underrepresentation has practical consequences, as Renata Mielli from the LAC region noted: “As new processes and discussion forums emerge in different agencies… it becomes more difficult to guarantee the effective participation of the countries of the global south, of small countries, of civil society and even of the private sector, which are not so economically powerful.”
Structural Barriers to Participation
Several sessions identified concrete barriers preventing meaningful participation. The Closing Ceremony highlighted practical obstacles, where Jacline Jijide shared: “Because Malawi does not process Schengen visas, I had to travel over 1,800 miles by bus to Pretoria, South Africa… if participants from the global South must overcome such barriers just to attend this, then it’s a challenge.”
Economic barriers also limit participation in technical standards development. In WS #241, Andrew Campling addressed barriers to participation: “There are other barriers though. So the cost, there is a real cost to attend. The IETF, for example, to try and diversify attendance, we meet in different parts of the world, rotate each time. So there are pretty extensive travel costs, hotel costs and so on. There’s also the cost of people’s time.”
The Importance of IGF for Global South Representation
Many speakers emphasized the unique role of IGF as an inclusive platform. In the Welcome session, Minister Moorosi stated: “The IGF is one of the few truly inclusive platforms where we can engage global actors on our terms, raising issues such as rural connectivity, digital literacy, cyber security and digital trust. Without it, we run the risk of being sidelined by models that prioritizes commercial or geopolitical power over equity and development.”
Regional Cooperation and Unified Voices
Several sessions highlighted the importance of regional coordination to amplify Global South voices. In Open Forum #24, Ayman El-Sherbiny called for “strengthening the participation in the global processes” and noted that “More than half of them, they need really active fellowships and support to participate in the global IGF.”
The African Union’s approach was particularly emphasized in Open Forum #43, where the Tanzania declaration called for “elevating African digital influence, enhancing intra-Africa coordination to ensure effective and sustainable African engagement in the global IGF forum.”
Addressing Power Imbalances with Tech Companies
Several speakers addressed the power imbalances between Global South countries and major tech platforms. Pamella Sittoni highlighted African countries’ vulnerability: “when you look at Africa’s situation, for example, we find ourselves in a situation where we can’t really have the bargaining power against these companies. We look at a company like Google or Meta, and if they pulled out of Africa, what difference would it make to their bottom line? Obviously none, but what impact would it have on the information flow in that part of the world? A great impact.”
Capacity Building and Technical Standards Participation
Multiple sessions emphasized the importance of capacity building to enable meaningful participation. In WS #226, Amrita Choudhury described capacity building initiatives: “So there were volunteers who went, worked with the technical community to develop programs, which could be undertaken in technical engineering colleges, et cetera, so that the skills can be built up.”
The importance of meaningful participation in technical standard-setting was emphasized in WS #241, where Makola Honey emphasized: “meaningful participation of the Global South in technical standard-setting is very important. So we’re not just adapting to decisions, but we are helping shape them.”
AI Governance and Global South Inclusion
The discussions on AI governance highlighted particular urgency for Global South inclusion. In Open Forum #30, Abhishek Singh emphasized: “we need to bring countries of Global South at the decision-making tables.” The African perspective was particularly highlighted in Open Forum #67, where Adil Suleiman stated: “we are also advocating for a seat for Africa when it comes to AI policy-making and AI governance. I think it’s very important… we need to position ourselves more when it comes to the global decision-making when it comes to AI, I think it’s very crucial that we are part of this decision-making process.”
Funding and Resource Challenges
Financial constraints emerged as a recurring theme limiting participation. Iria Puyosa highlighted how funding cuts threaten the sustainability of Global South civil society organizations.
What is missing in our current approaches to addressing digital divides and why are we not there yet?
What is Missing in Our Current Approaches to Addressing Digital Divides and Why Are We Not There Yet?
Despite decades of effort and investment, the digital divide remains a persistent global challenge, with 2.6 billion people still offline according to multiple speakers across sessions. The discussions at IGF 2025 revealed fundamental gaps in current approaches and systemic barriers that prevent meaningful progress.
From Dialogue to Action: The Implementation Gap
A recurring theme was the need to move beyond endless discussions to concrete action. As Minister Moorosi emphasized in the Welcome to Norway session, “My last reflection is that really I feel like we’ve talked enough. I feel like we’ve had too many dialogues. I think it’s about time we act”. This sentiment was echoed by Raj Singh in WS #231, who noted that “we seem to be creating new digital digital divides constantly we’re not stopping… we were talking about stuff 30-20 years ago in a slightly different guise it was ICT4D we’re still talking about the same issues some of those issues have not been solved”.
Beyond Infrastructure: The Usage Gap
A critical insight emerged that the problem has evolved from a coverage gap to a usage gap. William Lee from ITU noted in Open Forum #83 that “one of the things we have seen is that the problem is now less and less a coverage gap and more and more a usage gap”. Alison Gillwald reinforced this in WS #484, highlighting that “we’ve got least developed countries throughout Africa, Rwanda, Uganda, Mozambique with above 95% coverage, some of them 99% coverage, and yet they have less than 20% connectivity”.
Redefining Meaningful Connectivity
Current measurement standards were heavily criticized as inadequate. Onica Makwakwa pointed out in WS #225 that “the standard of a connected person is someone who uses the internet once every three months is so underwhelming”. She further emphasized in the PNMA Concepts Portfolio session the need for “raising the standard to go beyond just basic access to begin to address meaningful connectivity”.
Affordability as the Primary Barrier
Multiple sessions identified affordability as the most significant barrier. Andrew Lewela noted in WS #204 that “At the top of that list, actually, is still affordability. Affordability of network plans or internet plans, affordability of devices”. This was reinforced by Doreen Bogdan-Martin in the Opening Ceremony, who stated that “Fixed broadband can cost up to a third of household incomes”.
Data and Measurement Challenges
The lack of disaggregated data emerged as a critical gap. Onica Makwakwa highlighted in WS #225 that “we are still struggling at just having data that’s disaggregated even by gender, believe it or not. In 2025, we are not collecting gender disaggregated data”. Judith Hellerstein added in Open Forum #76 that “Many of the problems we find in countries is that while they may have a policy, there’s no enforcement because they don’t have the metrics behind it”.
Financing and Sustainability Challenges
The funding crisis was identified as a major obstacle. Francis Gurry noted in Open Forum #13 that “it’s estimated that next year there will be about 38% less development funding available around the world”. The sustainability of community projects was highlighted by a participant from Bangladesh in the NRI Collaborative Session who pointed out that “When community projects end, network is end”.
Language and Cultural Barriers
Language emerged as a fundamental barrier often overlooked in digital inclusion efforts. Virginia Paque stated in Lightning Talk #90 that “the largest, strongest challenge to multi-stakeholder inclusion and voices in global processes is communication. This challenge predates the digital divide. It underlies the digital divide”. Jennifer Chung emphasized in WS #144 the need for “a paradigm shift to thinking about looking at it as multilingual first as opposed to English first”.
Gender Digital Divides
Gender inequalities were highlighted as a persistent challenge. The High Level Session 5 noted that “Globally there were 244 million more men than women using the internet in 2023”. In WS #479, participants identified that “most of the time we perceive technology as gender neutral and policies are made that technology will benefit everyone equally”.
The Emerging AI Divide
As artificial intelligence becomes more prevalent, speakers warned of a new divide layering on top of existing ones. Dr. Rajendra Pratap Gupta noted in the Dynamic Coalition session that “we are prioritizing AI over access of internet and not having internet itself is a disability”.
Given the slow progress in addressing digital divides despite years of effort, what fundamental assumptions about digital inclusion might we need to challenge or rethink to make meaningful progress in the coming decade?
Based on discussions across multiple IGF 2025 sessions, several fundamental assumptions about digital inclusion need challenging to accelerate progress in addressing digital divides:
From Basic Connectivity to Meaningful Connectivity
A key paradigm shift involves moving beyond basic connectivity metrics. Fabio Senne explains that “among the most relevant conceptual shifts that we have in this recent period, is the idea that we also need to take care of meaningful connectivity, not just basic connectivity” citing Brazil where “almost 90% of the population, has some connections with the internet and use in a sense only 22% according to our estimates have a meaningful connectivity.”
From Individual to Collective Approaches
Traditional approaches focus on individual access rather than collective solutions. Fabio Senne notes: “for a long time that’s because we do mostly surveys or they interview one individual so we try to think about digital inclusion as an individual characteristic… but most of the problems are collectives, collective problems.”
From Technology-First to Solution-First Approaches
Adil Suleiman advocates: “I think we need to think more about not connectivity, but solutions. I think we need to think about also community, provide solution to the communities.” This challenges the infrastructure-first assumption by addressing actual community needs.
Challenging Sequential vs. Parallel Development
The assumption that basic infrastructure must precede advanced technologies is being questioned. Adil Suleiman states: “the questions in front of us, whether to first work on the fundamentals, and then do AI, or do them in parallel. And I think we don’t have options. We have to do everything at the same time.”
Rethinking Business Models
Pure commercial models are insufficient for universal inclusion. Onica Makwakwa argues: “the pure commercial model alone is not going to be a size that fits all communities, so we need to be open to the fact that… We have to be willing to think about subsidies to certain communities or co-op model connectivity.”
Challenging Rights vs. Charity Frameworks
Malin Rygg advocates for a human rights framing: “The digital divide is no longer just about access, it is fundamentally about human rights.” Aydan Férdeline emphasizes: “we think that we shouldn’t see financial inclusion as charity or development aid or corporate social responsibility… it’s also a right, the right to be able to participate in the right to economic participation.”
From English-First to Multilingual-First
Manal Ismail states: “instead of making languages change technology, we need to start making technology serve languages.” This represents a paradigm shift from treating multilingual support as an afterthought to making it foundational.
These discussions suggest that meaningful progress requires fundamentally rethinking approaches from infrastructure-focused, individual-based, charity-oriented models toward rights-based, community-led, solution-first strategies that prioritize meaningful connectivity and collective empowerment.
What are the risks of over-emphasising quantitative metrics in measuring digital inclusion, potentially overlooking qualitative aspects of meaningful connectivity like empowerment, digital literacy, etc.?
Risks of Over-Emphasising Quantitative Metrics in Digital Inclusion Measurement
The discussions across multiple IGF 2025 sessions revealed significant concerns about the limitations of purely quantitative approaches to measuring digital inclusion, with many speakers advocating for more holistic frameworks that capture qualitative aspects of meaningful connectivity.
The Meaningful Connectivity Imperative
Several sessions emphasized the critical distinction between basic connectivity metrics and meaningful access. In WS #484, Alison Gillwald demonstrated this gap starkly, explaining that when Brazil “applied the Meaningful Connectivity formula” their “internet penetration figures previously above 80%…went right down to 20%”. She noted that “the figure more like the people who are really not digitally included, substantively digitally included, is much closer to 4, you know, 4 billion or 4.5 billion” when using meaningful connectivity definitions rather than basic access metrics.
This theme was reinforced in WS #225, where Onica Makwakwa advocated for “moving away from measuring on basic connectivity” and emphasized that “Meaningful connectivity is about daily access, especially when we are talking about the age of artificial intelligence”.
The Value of Qualitative Research Approaches
The importance of qualitative methodologies was extensively discussed in Open Forum #29. Onica Makwakwa emphasized “the qualitative approach. I think it’s helped us really, truly understand what the hidden gaps may be”, noting that “when you just do a regular survey, asking people, are you online, are you not online, all you would find out is that, no, I’m not online, but not really understanding what are the drivers behind that”. Pria Chetty added that their “methodological learning has been the value of having local participation in the collection of the data” and stressed the importance of “understanding the lived experiences”.
Beyond Output Metrics to Impact Measurement
In WS #231, Raj Singh specifically addressed the limitation of current measurement approaches: “If I look at a lot of the metrics that are being used, the metrics are very output-related, they’re not outcome-related”. This sentiment was echoed by speakers focusing on transformation and empowerment rather than simple connectivity numbers.
Christopher Locke in the same session emphasized the need for “being able to understand what the impact of a community network is, not just in profit and sustainability, not just in a very simple calculation of contribution to the economy, but to a very broad social impact network of what is the implications on health, on education, on a very broad range of factors”.
Alternative Measurement Frameworks
Several sessions presented innovative approaches to measurement that go beyond traditional quantitative metrics. In Day 0 Event #119, the ROAM-X framework was highlighted for its multidimensional approach. Fabio Senne explained that “if you take just the main picture of access, basic access, you don’t see huge gaps in terms of access. But when it goes to meaningful connectivity in a deeper analysis, you can see very huge gaps”.
In WS #305, Marie Lisa Dacanay presented research using alternative metrics including “Development indexing” and “social return on investment” to capture broader impacts including “increased levels and capacities for inclusive human development” and “the empowerment of the community to control, govern, and manage internet and digital resources”.
The Digital Literacy and Skills Gap
Multiple sessions highlighted how quantitative metrics often miss critical aspects of digital literacy and empowerment. In Dynamic Coalition Collaborative Session, Dr. Muhammad Shabbir pointed out that “If we want the people to use the internet meaningfully we need to not just think only about those 2.7 billion who are not connected but 1.5 billion people who may be connected maybe in a well-developed country may have state-of-the-art devices with the high-speed internet but still be unable to use the internet”.
Measurement Challenges and Hidden Exclusions
Several sessions revealed how traditional metrics can mask significant exclusions. In WS #257, concerns were raised about how “people with chronic diseases, like leprosy, or people with disabilities, struggle with Aadhaar enrollment or authentication, leading to exclusion of those in need of healthcare services”, highlighting how quantitative coverage doesn’t capture meaningful access for marginalized groups.
In WS #479, Dr. Emma Otieno highlighted the lack of meaningful measurement: “What has lacked completely is measurement, the intentional and meaningful tracking, measurement, so that we can see what are the outputs out of these policies”.
The Need for Holistic Approaches
The discussions consistently pointed toward the need for measurement frameworks that balance quantitative data with qualitative insights, local participation, and focus on empowerment and transformation rather than simple connectivity numbers. As emphasized across multiple sessions, meaningful digital inclusion requires understanding not just who is connected, but how that connectivity translates into empowerment, skills development, and improved life outcomes.
How do we balance the growing emphasis on AI divides and governance with the need to address broader issues of digital inequality and infrastructure gaps, ensuring that the focus on AI does not overshadow other critical areas of digital policy that require attention?
The question of balancing AI governance with broader digital inequality was extensively discussed across multiple IGF 2025 sessions, with speakers consistently emphasizing that AI divides cannot be addressed in isolation from fundamental infrastructure gaps and connectivity challenges.
The Scale of the Challenge
Multiple speakers highlighted the massive scope of digital exclusion that persists. As Dr. Rajendra Pratap Gupta noted, “I think overly we are prioritizing AI over access of internet and not having internet itself is a disability, I would say.” The magnitude is striking – Chengetai Masango observed that “We still have 2.5 billion people who don’t have access to the internet. And this is something that we need to see if we can reduce that gap, because now we have people who don’t have meaningful access, and then now we’re having the AI divide as well. So now we’re having two divides that are operating at the same time.”
Infrastructure as Foundation
Speakers consistently emphasized that basic infrastructure remains the prerequisite for any advanced technology adoption. Salima Bah emphasized that “when we talk about technology or digital transformation, infrastructure, the availability of infrastructure plays a significant role, and too many within that, you’re talking about electricity being one critical aspect, you can’t do digital transformation if you don’t have regular and stable electricity.”
Cheryl Miller reinforced this point: “we have to remember that there are some areas of this world where they don’t have energy, and it takes massive amounts of energy to power AI. And so that’s something that we need to think about, how we manage that and how we prepare for that.”
The Compounding Nature of Digital Divides
Several speakers highlighted how AI threatens to exacerbate existing inequalities. Francis Gurry warned that “artificial intelligence now is another general purpose technology that will exacerbate or risk exacerbate the digital divide.”
Leon Cristian described the “double digital divide”: “fourth, the increasingly complex technologies that today require an infrastructure and a computing capacity that our countries don’t have. Now the digital divide is not only about having or not meaningful connectivity, it’s also about having enough capacity to run AI’s, quantum computation, blockchains, cryptos and all those technologies.”
Parallel Development Approaches
Rather than viewing these as competing priorities, many speakers advocated for simultaneous approaches. Adil from the African Union acknowledged: “I think the questions in front of us, whether to first work on the fundamentals, and then do AI, or do them in parallel. And I think we don’t have options. We have to do everything at the same time, which is always our challenge in every area.”
Lacina Kone from Smart Africa argued: “some people said, oh, Africa, you might be so behind because you only have a 40% of your population covered with the internet use, then why are you talking about AI? That’s not the question. Africa is not looking for the most powerful AI, it’s looking for the most useful one, looking at the agriculture, looking at the healthcare and looking at the education.”
The Need for Holistic Approaches
Speakers emphasized that effective digital governance requires comprehensive, integrated approaches. Yu Ping Chan explained the interconnected nature: “This is also not to say that it’s not even just about AI, right? Because even before we have AI, we need to have data. Before we have data, we need to have basic connectivity. Before we have connectivity, we also have to talk about things such as infrastructure and energy, all of which are challenges for the [Global South] countries across the globe.”
Flavio Vagner highlighted the dual nature of the challenge: “So first of all, meaningful connectivity. This is still a major challenge. Billions remain still offline, particularly in least developed countries and marginalized communities… A second issue is the rapid emergence of artificial intelligence everyone is talking about. Its impact both positive and negative continues to expand affecting labor, ethics, the environment and still deserve careful assessment by the society.”
Governance Framework Relevance
Several speakers noted that existing frameworks remain relevant for addressing both traditional and emerging challenges. Jimson Olufuye emphasized: “The WSIS action lines, they are really still very, very relevant to address any emerging issues. And for that matter, emerging issues like artificial intelligence, data governance, information integrity… Even many more will still emerge, but from a very close and constructive examination, the 11 [WSIS] Action Lines covered anything that could come up, at least from my perspective.”
Call for Balanced Prioritization
The discussions revealed strong consensus that AI governance must not overshadow fundamental digital inclusion efforts. Dr. Rajendra Gupta argued: “While the narrative has shifted to artificial intelligence, for me still, I think that 2.6 billion people do not have access to internet. That means we are keeping them out of the economy of the current times, and we live in digital age.”
Senator Catherine Mumma called for investment beyond regulation: “Beyond regulation, we need to give some financial investment in the necessary public digital infrastructure that would see those in rural areas equally participating in the benefits of the digital space and technology.”
The consensus across sessions was clear: while AI governance is crucial, it must be pursued alongside, not instead of, addressing fundamental digital inequalities and infrastructure gaps that continue to exclude billions from the digital economy.
How can we ensure that efforts to promote digital financial inclusion don’t expose vulnerable populations to new forms of exploitation?

What are the risks of over-emphasising STEM education at the expense of humanities and social sciences in preparing for the digital future? And how can we address them?
No relevant discussions found.
How can we better coordinate capacity building efforts among development agencies and partners to avoid duplication and maximise impact?
The question of coordinating capacity building efforts among development agencies to avoid duplication and maximize impact was extensively discussed across multiple sessions at the IGF 2025, with speakers consistently identifying this as a critical challenge requiring urgent attention.
The Scale of the Duplication Problem
The extent of duplication in development efforts was starkly illustrated in WS #231 Address Digital Funding Gaps in the Developing World, where Raj Singh emphasized: “Each time I look at what everyone’s doing, including my own organisation, it’s also very clear to me that there’s actually, we talk about collaboration, but there’s actually very little of it. Everyone’s got their specific objectives they have to do something and they go out and try and do it… there still is a lot of duplication out there.” This was further reinforced by an online participant who noted there were “like 60 different funds that were available to stimulate something of the Internet in the region. Impossible to see where they overlap, how they connect, etc.”
Impact on Developing Countries
The proliferation of uncoordinated processes particularly burdens developing countries with limited resources. In Open Forum #83 ITU Call for Inputs on the WSIS+20 Review, Cuba highlighted how “the increase in the amount of governance mechanisms and processes in the digital world make it difficult for many Member States, particularly developing countries, to participate.” Similarly, in WS #343 Revamping decision-making in digital governance, Jennifer Chung emphasized that “avoiding duplication of efforts not only helps, you know, non-governmental stakeholders, people with less resources, especially in APEC, governments themselves, especially in Asia Pacific, they have very small teams with a very wide portfolio, and they’re not able to be able to actually deep dive or follow every single process if it’s proliferated around.”
Successful Coordination Models
Several speakers highlighted successful approaches to coordination. In Open Forum #66 the Ecosystem for Digital Cooperation in Development, Tale Jordbakke from NORAD described their strategy: “We need to work on these challenges together, and as an agency, Norwegian government agency, we have been taking early risk by providing catalytic funding for digital public goods, and by ensuring pool resourcing, working through mechanisms like Digital Public Goods Alliance 50 and 5, working with co-develop, we can make sure that we avoid duplication and maximize impact.”
The Internet Society demonstrated effective coordination through their network approach. In How the PNMA Concepts Portfolio effectively contributes to the WSIS+20 Process and GDC implementation, Joyce Dogniez explained: “We currently work with over 20 partners in our 120 chapters across all regions, and we built a global network of over 200 local trainers… we focus on building the local capacity to train people who can then train others to bridge that gap. We can’t be everywhere, but people at the grassroots, in the local communities, they are.”
The Importance of Local Partnerships
Multiple speakers emphasized the critical importance of local partnerships in effective coordination. In the same session on digital funding gaps, Maarten Botterman noted that without local contact, “how can you successfully lend a global donation?” and referenced the Global Forum for Cyber Expertise as helping to link global knowledge with local needs.
Regional Coordination Initiatives
Regional organizations have emerged as important coordination mechanisms. In Open Forum #43 African Union Open Forum Advancing Digital Governance and Transformation, Maktar Sek directly addressed duplication: “You know, we have several organization, even at UN, we have several organization doing same thing, sometime… where we have a duplication, we have to avoid, it is the policy now. If you have two agency doing the same thing, we are going to merge them, it’s clear.” He emphasized looking at the “comparative advantage of each organization, and each organization who has this comparative advantage should lead any project related to each sector.”
UN System Coordination
Several speakers highlighted ongoing UN system efforts to improve coordination. In Day 0 Event #262 Enhancing the Role of the IGF Through GDC Follow Up and WSIS, Gitanjali Sah noted: “So we will deepen our coordination through the United Nations Group on Information Societies, ensuring that there is no duplication and that we are working in sync based on each other’s mandates and priorities to ensure that we have a UN system-wide digital and all these actions are aligned and they’re complementary.”
Resource Constraints and Efficiency
The urgency of coordination was underscored by resource constraints. In Open Forum #60 Cooperating for Digital Resilience and Prosperity, Torbjörn Fredriksson from UNCTAD directly stated: “Unfortunately, there are still too many examples of duplications of work, something that we need to minimize, especially in these times of shrinking resources for technical assistance and capacity building, as well as for dialogue.”
Practical Solutions Proposed
Several concrete solutions emerged from the discussions. In Open Forum #5 Bridging digital divide for Inclusive Growth Under the GDC, Minister Moorosi from Lesotho suggested: “the UN being deliberate to bring everyone together and see where the duplications are and eliminate the duplications.”
What can we expect from the Working Group on Data Governance established within the Commission on Science and Technology for Development?
The Working Group on Data Governance established within the Commission on Science and Technology for Development (CSTD) was mentioned in several sessions at IGF 2025, though detailed discussions about specific expectations were limited.
The working group was acknowledged at the highest level during the Opening Ceremony, where UN Secretary-General Antonio Guterres noted that “In Geneva, a new United Nations multi-stakeholder working group is advancing principles on data governance and sustainable development.”
Regional perspectives on the working group emerged, with Peace Oliver Amuge from AFRICIG stating during Parliamentary Session 4 that “And this year, we are finalizing on our output that is giving a recommendation to the CSTD working group on data governance.”
The strategic importance of the working group was highlighted in Open Forum #64, where a participant named Jackie emphasized that “data is a very strategic and key asset for both AI and the digital economy and with that I just want to share with you that we have recently established a multi-stakeholder working group on data governance so hopefully that could provide some recommendation on how we can develop a good data governance framework.”
However, participation challenges were noted. During Networking Session #37, Abraham from Geneva/Austria, who identified as a member of the working group, observed that “when these kind of working groups are announced, people are kind of reluctant, both diplomats, government officials, and civil societies.”
The working group was also referenced in the context of broader UN digital cooperation efforts, with mentions in Open Forum #18 and Open Forum #48, though without detailed discussion of expected outcomes.
Who can do what to achieve the desired interoperability of data systems and data governance arrangements, considering the fact that there are different interests and priorities among and between countries, companies, and other stakeholders?
The question of achieving interoperability of data systems and data governance arrangements across different stakeholders with varying interests and priorities was extensively discussed across multiple IGF 2025 sessions, revealing a complex landscape requiring multi-stakeholder collaboration, regional coordination, and technical standardization.
Multi-stakeholder Collaboration as the Foundation
Several sessions emphasized that no single entity can achieve interoperability alone. In the digital identity workshop, Debora emphasized: “it’s really about collaboration. There’s not a single organization or individual or bright mind in the world that can do it on their own. And so we pulled together an ecosystem of private sector, public sector, so government, research institutes, standard bodies, to collaborate and make this reality.”
The open-source AI forum reinforced this approach, with Tobias Thiel noting: “what’s really essential there is the need to have really collaborative partnerships where you bring the variety of different perspectives together, and that’s the public sector or regional institutions like the African Union, that’s the private sector, that’s academia, that’s civil society.”
Regional and Working Group Approaches
Multiple sessions highlighted the importance of regional coordination and specialized working groups. Dr. Jimson in the digital identity session suggested: “we need to constitute maybe a working group so that we could actually identify the gaps that we have across our sub-regions and regions, identify the champions with regard to parameters like a trust framework.”
The regional data governance forum demonstrated practical approaches to harmonization. Folake Olagunju from ECOWAS explained: “We’re looking at harmonisation at the regional level, but not homogenisation. So yes, we need to harmonise because we’re a regional bloc, we have similarities, but then it needs to be homogeneous in a certain extent so that it’s tailored to the different nuances of each member country.”
Technical Standards and Frameworks
Technical interoperability emerged as a crucial foundation. In Main Session 2, Jovan Kurbalija proposed focusing on specific technical elements: “If my knowledge is qualified by one company and I want to move to the other platform, company, whatever, there are no tools to do that. My advice would be to be very specific and to focus on the standards for the weights and then to see how we can share the weights and in that context how we can share the knowledge.”
The DPI data governance workshop highlighted continental frameworks, with Souhila Amazouz describing the African Union’s approach: “interoperability framework for digital ID, which is also a continental framework that was adopted in 2022. It aims to create space for countries to agree on minimum standards and technical parameters, and also harmonization of policies and regulations.”
Public-Private Partnerships and Open Standards
The role of public-private partnerships in achieving interoperability was emphasized across multiple sessions. In the cloud sovereignty discussion, Agustina Brizio highlighted the importance of “prioritizing local providers, you’re basically thinking about a more open standard guaranteeing portability and interoperability from a regulatory framework.”
The DPI open source forum emphasized regulatory collaboration, with Larry Wade stating: “it’s being able to bring the regulators and governments along the journey with you. And it has to be a public-private partnership.”
Breaking Down Data Silos
Addressing organizational and technical silos emerged as a critical challenge. The public sector data governance session revealed practical challenges, with Nancy Kanasa noting: “the Department of Health would say that, oh, my act, we are mandated, and our act does not allow us to share data. But then the very same data is also required by the Department of Education.”
Pilot Programs and Learning from Success Stories
Several sessions emphasized learning from successful implementations. DG Abisoye in the digital identity workshop suggested: “looking at countries that have sort of matured on the identity level from Ghana, Rwanda, Nigeria, we need to ensure that we take those countries and then run a pilot for cross-border interoperability.”
Federated and Distributed Governance Models
The Fediverse session explored distributed governance approaches, with Delara Derakhshani explaining: “federated ecosystems are by design distributed, but that distribution inherently obviously comes with fragmentation… what DTI is doing is we’re focused on creating a sort of a shared governance infrastructure, not to centralize control, but to really provide coordination mechanisms that align responsibilities across diverse players.”
Avoiding Fragmentation Through Coordination
The risks of fragmentation without proper coordination were highlighted across sessions. In the AI governance forum, Melinda Claybaugh emphasized: “from a private company’s perspective, the challenge of running this technology and developing and deploying this technology that…”
How can we move away from the rather false dichotomy between data localisation and cross-border data flows, and focus on different approaches that combine localisation and free flows depending on the types of data?
The false dichotomy between data localization and cross-border data flows was addressed across several IGF 2025 sessions, with speakers proposing nuanced approaches that categorize data types and employ technical solutions to enable both sovereignty and interoperability.
Technical Solutions: Federated Systems and APIs
The most concrete technical approach was presented in WS #290 on Digital Identity, where Dr. Jimson outlined a federated database model: “Technically, I think federated database is ideal. And that’s what we use in Nigeria. And through API, you can connect other databases. So even across borders, countries can keep their data locally. And then through API, you can share the specific data categories that have been agreed upon based on policy framework.”
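The federated pattern Dr. Jimson describes (data kept locally, with an API exposing only the categories agreed upon under a policy framework) can be sketched in a few lines. The code below is purely illustrative: the registry class, field names, and agreed categories are hypothetical and not drawn from any actual national system.

```python
# Illustrative sketch of federated, policy-gated data sharing.
# All names and data here are hypothetical.

AGREED_CATEGORIES = {"name", "id_number"}  # set by the policy framework

class NationalRegistry:
    """A country's locally held identity database (never leaves its borders)."""

    def __init__(self, records):
        self._records = records

    def api_lookup(self, person_id):
        """Cross-border API: returns only the agreed-upon data categories."""
        record = self._records.get(person_id)
        if record is None:
            return None
        # Filter the local record down to the shared categories only
        return {k: v for k, v in record.items() if k in AGREED_CATEGORIES}

# A partner country queries the API and never sees categories
# outside the agreement (e.g. biometric data stays local).
registry = NationalRegistry({
    "NG-001": {"name": "A. Example", "id_number": "NG-001",
               "biometrics": "<kept local>"},
})
shared = registry.api_lookup("NG-001")
# shared contains only "name" and "id_number"
```

The design choice mirrors the quote: sovereignty is preserved because the full record never crosses the border; only the policy-approved projection of it does.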
Data Categorization Approaches
Several sessions emphasized the importance of distinguishing between different types of data. In Open Forum #7 on Regional Data Governance, speakers highlighted varied approaches to sensitive versus non-sensitive data. Folake Olagunju discussed ECOWAS efforts to “define sensitive and non-sensitive data categories for our member countries. What we find is when you ask someone to share data, they’re a bit reluctant because they don’t know which one needs to be, which data needs to be sovereign and which data can be shared.” Meanwhile, Meri Sheroyan noted that “Armenia or any other country localize sensitive data such as biometric information or health records.”
Flexible Sovereignty Models
The Ukraine example presented in Day 0 Event #270 on Digital Autonomy illustrated how rigid localization can be counterproductive. Jeff Bullwinkel explained that “Ukraine had on its books a law that required government data to be stored within the borders of Ukraine. They suspended that law and that allowed Microsoft and some other companies to actually migrate their data from Ukraine across our own infrastructure in the European Union, paradoxically perhaps giving them data sovereignty by dispersing their digital assets across Europe.”
Cross-Border Frameworks and Trust Mechanisms
In WS #259 on Multistakeholder Cooperation, Flavia Alves referenced the OECD’s balanced approach: “there is an excellent work that has been done by the OECD called the Trusted Government Access to Data Flows… there was an agreement on how to approach data flows, to secure data flows, at least among those countries that are from OECD.” She emphasized that “data flows is also not only an economic issue, but it’s also a privacy and security issue, that we need to be careful and balance how we do the safeguards on privacy at the same time on law enforcement and others.”
Personal vs. National Data Sovereignty
A different perspective emerged in Day 0 Event #220 on Internet Credibility, where Yosuke Nagai argued for shifting focus from national to personal data sovereignty: “if you want to build a democracy, the data sovereignty that needs to be developed is personal data sovereignty. If laws and regimes protect personal digital sovereignty, which allows people’s data to be owned by themselves, regardless of where the platform is, that is protection to the end user.”
Collaborative Sovereignty Models
The Host Country Open Stage offered a reconciling perspective through Francis D Silva’s concept of collaborative sovereignty: “But sovereignty doesn’t mean isolation. On the contrary, we believe that sovereignty implies collaboration… We participate actively in European projects, essentially embracing transnational collaboration and supporting interoperability that is essential to make that happen.” He advocated for “secure, values-driven digital infrastructure, sovereign but also transnational” enabling “seamless data exchange across borders” while maintaining national control.
These discussions suggest that moving beyond the false dichotomy requires technical solutions like federated systems, clear data categorization frameworks, flexible policy approaches that can adapt to circumstances, and trust mechanisms that enable secure cross-border cooperation while respecting legitimate sovereignty concerns.
What are the implications of framing digital sovereignty primarily in terms of data control, while paying less attention to arguments related to technological capacity building?
The implications of framing digital sovereignty primarily in terms of data control while neglecting technological capacity building emerged as a critical concern across multiple IGF 2025 sessions, with speakers highlighting the risks of this narrow approach for developing nations and the Global South.
The Risks of Data Control-Centric Approaches
Several sessions revealed concerns about framing digital sovereignty solely around data control. In the Policy Network on Internet Fragmentation, Joyce Chen warned that “digital sovereignty has become sort of the buzzword for a lot of nations to talk about, or rather to frame, the way that they think about the internet and how they feel their citizens should use the internet. And that does rather disturb me, because a lot of the times where digital sovereignty is raised is often code word for something else.”
The Open Forum on Open-source AI as a Catalyst for Africa’s Digital Economy highlighted tensions between control-focused approaches and innovation. While Adil emphasized sovereignty as ensuring “African government and the African communities, the African families, they can control what is it, what part of AI they want to take, and what part of AI they don’t want to consider,” Kojo Boakye expressed concerns that “control, for me, negates in some ways or may, in some jurisdictions and with some governments, negate what I have a bias and believe in, which is the ingenuity, innate ingenuity and brilliance of young Africans.”
The Imperative of Technological Capacity Building
Multiple sessions emphasized that true digital sovereignty requires substantial investment in technological capacity. In the Day 0 Event on Everything in the Cloud, Agustina Brizio argued that we must “rethink the concept of sovereignty because when we usually talk about sovereign cloud initiatives or sovereign technologies, our mind instantly goes, okay, we need to have things within our territory developed by us and we have seen that this is not sustainable at a large scale. That requires too much investment and a lot of human talent that is not currently available.”
The Day 0 Event on Enhancing Data Governance in the Public Sector provided concrete examples of successful capacity-building approaches. Luca Belli explained how “India leveraged their very large population of highly skilled information society, information engineers and information and data scientists to create DPIs,” while China “lavished billions, literally, to produce the technology they wanted to have” resulting in technological independence.
Infrastructure Dependencies and Digital Colonialism
The Parliamentary Roundtable highlighted the severity of technological dependencies in Africa. John K.J. Kiarie pointed out that “even as we talk about Internet, everything about Internet is never manufactured in Africa. We do not manufacture the fiber optic cables. We do not manufacture the devices. We do not have a single satellite in the terrestrials.” He warned against “digital plantations” and called on advanced countries to take responsibility for supporting technological development.
This concern was echoed in the NRI Collaborative Session on Data Governance, where Kosi Amesinu from Benin raised fundamental infrastructure concerns: “We were talking about data, local data, but we don’t have data centre. Where do I put the data? Where do I put it? We need data centre first, green data centre.” Beatriz Costa Barbosa responded by advocating for countries to “build a public infrastructure to deal with the data from the dependent citizens, because this is important for your digital sovereignty.”
The Need for Balanced Approaches
Several sessions advocated for more nuanced approaches that balance data governance with technological capacity. The Policy Network on Internet Fragmentation distinguished between legitimate sovereignty concerns and fragmentation, with Marilia Maciel arguing that “the association between digital sovereignty and isolationism and fragmentation is not necessarily true. One thing does not necessarily lead to the other.”
The Open Forum on Local AI Policy Pathways emphasized the importance of building indigenous technological capabilities. Anita Gurumurthy called for efforts to “decolonize scientific advancement and innovation in AI. So how do we build our own computational grammar?”, highlighting that sovereignty requires fundamental technological capacity rather than just data control.
Personal vs. National Digital Sovereignty
The Day 0 Event on Restoring Internet Credibility offered an alternative perspective through Yosuke Nagai, who argued that the “focus a lot of times on national digital sovereignty is really misplace[d]… If we really want to protect our citizens, we need to give them the personal digital sovereignty.” This perspective suggests that data control frameworks may miss the fundamental goal of empowering citizens.
Conclusion
The discussions across IGF 2025 sessions revealed a strong consensus that framing digital sovereignty primarily in terms of data control while neglecting technological capacity building creates significant risks, particularly for developing nations. This approach may lead to continued dependency, stifle innovation, and fail to address the fundamental infrastructure needs required for true digital autonomy. The sessions emphasized that sustainable digital sovereignty requires substantial investment in local technological capabilities, infrastructure development, and human capital, alongside appropriate data governance frameworks.
When tackling dis/misinformation and other types of harmful content, how do we move away from over-emphasising technical solutions, and focus more on addressing underlying societal issues fueling the spread of such content?
The question of moving away from over-emphasizing technical solutions when tackling disinformation and harmful content, toward addressing underlying societal issues, was extensively discussed across multiple IGF 2025 sessions. The conversations revealed a strong consensus among participants that purely technical approaches are insufficient and that comprehensive, multifaceted strategies are needed.
The Limitations of Technical-Only Approaches
Several speakers highlighted the inadequacy of purely technical solutions. In the Open Forum on AI and Disinformation, David Caswell emphasized that “this attempt to put some order on the information environment has not been successful” and advocated for systems-level approaches rather than case-by-case technical fixes.
From a regulatory perspective, Bia Barbosa in WS #133 on Platform Governance discussed how the Brazilian approach focuses on “regulation of process, algorithms and content moderation mechanisms rather than individual content, in a perspective close to the concept of addressing the systemic risks of this service.”
Addressing Root Causes and Societal Issues
Multiple sessions emphasized the importance of addressing underlying societal vulnerabilities. In the High Level Session on Information Space, Minister Lubna Jaffery highlighted that “A part of strengthening resilience to disinformation campaign is an inclusive and a just society. This facilitates trust, stability, and the abilities of citizens to take part in an open and informed public discourse.”
The Open Forum on Protecting Refugees provided a concrete example of how societal issues fuel misinformation. Mbali Mushathama explained that “South Africa is 31 years into its democracy… there are still significant gaps… high unemployment rates… limited public resources… where there are limited public resources, it can create a sense of competition… foreign nationals, including forcibly displaced persons, are oftentimes used as scapegoats for socioeconomic problems.”
Ecosystem and Multi-Stakeholder Approaches
The need for comprehensive ecosystem approaches was consistently emphasized. In the Open Forum on Climate Change Information Integrity, UNESCO’s Guilherme Canela stressed the importance of understanding “the ecosystem logic” and noted that “we couldn’t face the issue looking into just one of the actors.”
Charlotte Scaddan from the UN emphasized that “Our response has to be multifaceted and include prevention and mitigation measures across the information ecosystem. This includes strategic communications and advocacy, of course, political engagement, human rights-based policy, and community engagement.”
Education, Media Literacy, and Empowerment
Many sessions highlighted education and media literacy as crucial components, but emphasized they should be part of broader strategies rather than standalone solutions. In the Parliamentary Session on Digital Deceit, Camille Grenier noted that “media and information literacy and AI literacy training is crucial, but it is not a standalone answer to mis- and disinformation problem.”
The Host Country Open Stage featured Solve Kuraas Karlsen from Faktisk.no, who emphasized that “We don’t only need the media that can cover it and show it. We have our own newsroom, which is fact-checkers, that is checking out different things that is posted online. But we also need to empower the people to give them the tools.”
Community-Centered and Democratic Approaches
Several sessions emphasized the importance of community engagement and democratic participation. In WS #133 on Platform Governance, Janjira Sombatpoonsiri advocated for a “pluralist approach” that “moves beyond the legal sphere to encourage participatory fact-checking, structured pre-banking efforts, and community moderation initiatives.” She emphasized that “at the end of the day, it’s about creating democratic legitimacy for combating disinformation together.”
Promoting Quality Content and Supporting Independent Media
Beyond just countering false information, several speakers emphasized the importance of promoting quality content. Thibaut Bruttin in the High Level Session noted that “It’s not about chasing the bad. It’s not about taking down propaganda, which can equate censorship to some extent. It’s about also promoting the good, about rewarding journalism worthy of that name.”
The Parliamentary Roundtable on Safeguarding Democracy emphasized supporting independent media, with Grunde Almeland stating that “one of the key measures is supporting and strengthening independent media organizations.”
Addressing Gender and Cultural Issues
Some sessions highlighted how certain forms of harmful content require addressing deeper cultural and social norms. In WS #70 on Sexual Deepfakes, Juliana Cunha emphasized that “The misuse of DNA to create sexualized images of peers is not a tech issue or a legal gap. It’s a reflection of a broader systemic gender inequalities.”
Real-World Consequences and Violence
Several speakers emphasized how online misinformation leads to real-world violence, requiring comprehensive societal responses. In Parliamentary Session 5, Franco Metaza gave a concrete example from Argentina, describing how “a person who consumed so many hate messages appeared at the door of the house and shot him in the head.”
What are the risks of over-relying on AI-powered content moderation systems in diverse cultural contexts? And how can they be addressed?
Risks of Over-Relying on AI-Powered Content Moderation Systems in Diverse Cultural Contexts
Key Risks Identified
Cultural and Linguistic Bias
Multiple sessions highlighted the fundamental issue of cultural bias in AI-powered content moderation systems. As noted in the Parliamentary Session on Digital Deceit, “At a fundamental level, the AI model that you see today are trained on English speaking data from the internet that is mainly generated in the global north or in the north. And what it means is that it really sort of aggregate a specific vision of the world with its inherent biases.”
The workshop on Generative AI in Content Moderation provided concrete examples of these biases, with speakers noting that “removing terrorist content in Arabic got it wrongly 77% of the time” and how “Instagram falsely flagged Al-Aqsa Mosque, which is the third holiest mosque in Islam as a terrorist organization.”
Language Diversity Challenges
Several sessions emphasized how AI systems struggle with linguistic diversity. In the workshop on AI Readiness in Africa, speakers highlighted that “Remember, we have more than 2,000 languages in Africa. So if we allow those languages to be trained on the AI system, but not trained by us, it basically means even indigenous people will be impacted by any cultural biases using AI.”
The Parliamentary Session on Protecting Vulnerable Groups further illustrated this challenge, noting that “in our context, for example, in Uganda, there are about 50 languages that are spoken, in Uganda alone, not considering the whole continent. And because these are smaller countries, they don’t have a huge market share, you know, on these online platforms. They’re often not prioritized.”
Cultural Sensitivity and Context Understanding
The Parliamentary Session on Digital Policy Practices highlighted critical cultural sensitivity issues, with speakers explaining that “So we need to be very careful of the fact that the culture in which we are living, the social media platforms have to be sensitive about that culture. And this is the real challenge, how to make the social media platforms sensitive to the cultures.”
Systematic Discrimination and Over-Moderation
The workshop on International Law in Digital Spaces documented systematic issues, noting “Systematic censorship and discriminatory content moderation policies by these platforms, as seen in the suppression of Palestinian voices, undermine these digital rights. And the disproportionate over-moderation leads to restrictions limiting the reach of Palestinian content at the international level.”
Proposed Solutions and Approaches
Regional Expertise and Local Context
The High-Level Session on Information Space highlighted the importance of regional expertise, with TikTok’s representative explaining: “Sitting in North America, I don’t know the right solutions for Estonia. I can’t see what’s coming around the corner. And so it’s critically important that we have these regional safety councils with people who do have that expertise and will bring it to us and share it with us so that we can try to solve problems before they find their ways onto the platform.”
Localized AI Development
The session on Ethical AI-Generated Content provided a concrete example of localized solutions, with speakers describing how they are “working on our own AI models that are designed to classify hate speech and violence on social media platforms and different chat platforms in two different languages, Hebrew and Arabic” with “words, terms, definitions and data labeling that reflect the specific region we live in and countries we work on.”
Community Participation and Human Oversight
The workshop on AI in Content Moderation emphasized community involvement, calling on developers to “engage with greater community leadership and participation in the building of LLM” and to ensure that human feedback processes involve affected communities rather than just Silicon Valley experts.
Hybrid Approaches
Several sessions advocated for combining AI with human oversight. The High-Level Session noted that “When we pair technology with human moderation, we have found that we are able to remove violative content 98% of the time, proactively, before it’s reported to us as a platform.”
Multistakeholder Approach
The Parliamentary Session on Digital Deceit emphasized the need for a “multi-stakeholder perspective, but also from different local and regional level to input diversity in every stage of the AI life cycle from the data to the outputs.”
Conclusion
The discussions across multiple IGF 2025 sessions revealed that over-reliance on AI-powered content moderation systems poses significant risks in diverse cultural contexts, including systematic bias, linguistic discrimination, and cultural insensitivity. The proposed solutions emphasize the need for localized approaches, community participation, human oversight, and multistakeholder engagement to address these challenges effectively.
What are the long-term implications of the growing role of private digital platforms in shaping public discourse and democratic processes?
Long-term Implications of Private Digital Platforms in Shaping Public Discourse and Democratic Processes
The Internet Governance Forum 2025 discussions revealed deep concerns about the growing power of private digital platforms and their impact on democratic governance. Multiple sessions highlighted how a small number of tech companies now control global information flow and democratic discourse.
Concentration of Power
A central theme across sessions was the unprecedented concentration of power in the hands of a few tech giants. In the Dynamic Coalition Collaborative Session, audience member Kjetil Kjernsmo stated that “the power of internet governance is not in this room. It is chiefly with big tech.” Dr. Rajendra Pratap Gupta reinforced this concern, noting that “it is small number of large companies that drive the internet rather than large number of small companies.”
The Parliamentary Session 4 emphasized this power imbalance, with Anna Luhrmann from Germany stating that “the actual power in the internet currently resides with the big tech companies that have in their respective fields quasi-monopolies in many areas.”
Threats to Democratic Processes
Multiple sessions documented how platforms pose existential threats to democratic institutions. In the Platform Governance and Duty of Care Workshop, Beatriz Kira identified critical threats including “disinformation campaigns that consume elections, hate speech that spills from screens into the streets and real-world harms, and co-ordinated harassment that could silence marginalised voices.”
The Closing Ceremony featured Maria Ressa’s stark warning about platforms’ transformation from democratic tools to weapons of oppression. She observed that “In my own country, in the Philippines, I’ve watched social media transform from a tool of liberation into a weapon of oppression.” She emphasized that “Algorithms that amplify our worst impulses, that reward outrage over empathy, that trap us in bubbles of our own biases, these are not inevitable. They’re choices.”
Business Model Concerns
Several sessions identified platforms’ advertising-driven business models as fundamentally problematic for democratic discourse. In the Climate Change Information Integrity Open Forum, Harriet Kingaby explained how “advertising is essentially the funding model behind the attention economy” creating “unhealthy incentives for the production of content that has devastating consequences for information integrity.”
The Parliamentary Session on Digital Deceit reinforced this concern, with Camille Grenier noting that “big tech business models prioritize their… have laid out monetization for profit. And these business models create dependencies for private and public organizations, as well as individuals, and facilitate the weaponization of information.”
Impact on Media and Journalism
The discussions revealed how platforms undermine traditional journalism and media independence. In the High Level Session on Losing the Information Space, Minister Jaffery explained that “news media’s ability to perform their function as watchdogs and providers of reliable information are challenged by big tech platforms.”
The Information Integrity Workshop highlighted the scale of platform dominance, with Beatriz Barbosa noting that “two companies, Google and Meta, for sure, hold a dominant global position in the distribution of news and information.”
Algorithmic Manipulation and Content Moderation
Sessions examined how algorithmic systems shape public discourse in problematic ways. The Truth Under Siege event featured Pavel Zoneff explaining that “it’s really the issue of a handful of big tech companies that now mediate the flow of nearly all information and their algorithms. Their algorithms and platforms really rule and decide what is visible at scales.”
The AI Content Moderation Workshop warned about increasing homogenization, with Marlene Owizniak explaining that content moderation decisions “defined at the foundation level will also be replicated on the deployment one and really there’s even more homogeneity of speech as before.”
Impact on Youth and Democratic Participation
Several sessions highlighted particular concerns about platforms’ influence on young people’s political engagement. The Youth in Public Discourse Lightning Talk raised critical questions about democratic participation, with Katarina Juni Moneta asking: “So what happens to democracy when young people spend more time listening to influencers than to journalists and politicians?”
The High Level Session 5 documented specific harms, with Minister Tung noting concerns about platform algorithms where “Our kids are screaming for help because they are having trouble with sleep, with health issues, body issues, and so forth, because of these algorithms.”
Global South Perspectives
Discussions revealed how platform dominance particularly affects Global South countries. The Connectivity Gap Workshop highlighted “the regulatory power imbalances that now are growing between states and big tech companies. Countries of the global majority have today a minimum capacity to demand the fulfillment of guarantees and rights of these companies.”
Calls for Democratic Reclamation
Many sessions called for reclaiming democratic control over digital spaces. In the High Level Session 1, Thibaut Bruttin argued that “It’s a fantasy to…”
How can we create more effective mechanisms for addressing cross-border content moderation issues without creating global content standards?
Cross-Border Content Moderation: Regional Cooperation and Coordination Mechanisms
The discussions across multiple IGF 2025 sessions revealed a consensus that effective cross-border content moderation requires regional cooperation and coordinated frameworks rather than uniform global standards. Several innovative approaches emerged from the deliberations.
Regional Bloc Approaches
A prominent solution discussed was regional coordination. In Parliamentary Session 3, Teo Nie Ching from Malaysia advocated for regional engagement: “I think it is very, very important for us to engage this platform as a bloc… Because as 10 ASEAN countries, we have similar culture, we understand each other better, and therefore we shall be able to set a standard that actually meets our cultural and also a history and religious background, etc.”
Network-Based Coordination Models
The European Digital Services Act framework provided a concrete example of effective cross-border coordination without global standardization. In Lightning Talk #107, John Evans explained: “when someone wants to complain about a platform that is established in Ireland, they need to make the complaints to their local Digital Services Coordinator and that’s transmitted to the Irish regulator” and emphasized that “digital regulation, it works best when there is coordination across countries.”
Similarly, the Global Online Safety Regulators Network (GOSRN) was highlighted as an emerging coordination mechanism. Arda Gerkens noted in Parliamentary Session 3: “We’re part of Global Online Safety Regulators Network, GOZERN. That’s a new initiative… Let’s see how we can tackle this problem, because it’s a global problem, and we need to work together here.”
Flexible Norms Over Rigid Standards
The preference for flexible approaches over rigid global standards was articulated in Parliamentary Session 2, where Paul Ash noted that “shared norms can travel way faster and be far more effective and more flexible than any individual statute” while emphasizing multi-stakeholder approaches.
Industry Coordination Challenges
A significant gap identified was the lack of cross-platform collaboration. In WS #70, Janice Richardson emphasized “the lack of cross-platform collaboration” and stated that “until industry joins up, I think it’s going to be very difficult to find a solution.”
Jurisdictional Complexity and Need for Frameworks
The complexity of dealing with different jurisdictions was extensively discussed in WS #106. Jacqueline van de Werken highlighted that “brand protection also locally differs because local laws are very different… So we look forward very much to sort of global framework because it’s criminal law, it’s data protection law, it’s also copyright law, everything is at stake.”
Acknowledgment of Limitations
Several speakers acknowledged the inherent limitations of cross-border mechanisms. In WS #70, Robert Hoving noted: “A global legislation answer might be difficult… But then you can use a VPN and you act like you come from another country. So that’s difficult.”
The discussions revealed that while creating effective cross-border content moderation mechanisms without global standards presents significant challenges, the emerging consensus favors regional coordination, network-based cooperation among regulators, flexible shared norms, and enhanced industry collaboration as the most viable pathways forward.
What frameworks can be developed – and by whom – to ensure the well-being of content moderators, addressing their mental health, ethical challenges, and the need for continuous support in working for a healthier digital environment?
Based on the available session transcripts from the Internet Governance Forum 2025, the specific question regarding frameworks for ensuring content moderator well-being was not directly addressed or discussed in any of the sessions. While various sessions covered related topics such as digital governance, AI regulation, online safety, cybersecurity, and digital cooperation, none specifically focused on the mental health challenges, ethical dilemmas, and support systems needed for content moderators who work to maintain healthier digital environments.
The sessions that came closest to addressing content moderation issues included discussions on combating sexual deepfakes, protecting vulnerable groups online, and building safe online environments, but these did not delve into the human cost and welfare concerns of those responsible for moderating harmful content.
This represents a significant gap in the IGF 2025 discussions, as content moderator well-being is a critical aspect of maintaining digital safety and requires urgent attention from policymakers, tech companies, and international organizations to develop comprehensive support frameworks.
What are the implications of over-emphasising the role of technology in achieving the sustainable development goals? How to ensure that the broader systemic challenges (social and cultural) are not neglected in the pursuit of technological advancements?
Over-emphasizing Technology in Achieving Sustainable Development Goals: Balancing Innovation with Systemic Challenges
The discussions across multiple IGF 2025 sessions revealed significant concerns about over-relying on technological solutions for achieving the Sustainable Development Goals while neglecting fundamental social and cultural challenges.
The Technology-First Problem
Several speakers highlighted the risks of prioritizing technological solutions without addressing underlying systemic issues. In WS #231, Franz von Weizsaecker acknowledged that while “digital transformation is indeed a mainstreaming topic across our entire portfolio of achieving all the sustainable development goals,” he also noted potential conflicts, stating “some of the goals are conflicting. So if you look at the climate goals, of course, we have a huge energy consumption and corresponding carbon emissions resulting from AI, from data centers, from digital infrastructures.”
Juan Carlos Lara in Open Forum #68 emphasized that “the STGs are not something that can be held by technologies, but commonly are” and stressed the importance of embedding “rights and obligations at the center of SDG-related strategies” rather than relying solely on technological solutions.
The Gap Between Global Debates and Local Realities
Sook Jung Dofel highlighted the disconnect between high-level technology discussions and ground-level challenges, noting that “global debates often miss local complexity.” She provided a concrete example: “At WSIS we discuss data governance, for instance, but the issue in some countries is how do we protect people’s data when there is no data protection authority?”
Cultural and Social Foundations Matter
Several speakers emphasized that technology cannot solve problems rooted in social and cultural structures. In WS #70, Juliana Cunha addressed the systemic nature of gender-based online violence: “The misuse of DNA to create sexualized images of peers is not a tech issue or a legal gap. It’s a reflection of a broader systemic gender inequalities.”
Monseigneur Ruiz in the Media Hub reinforced this point, stating that “if we think that the problem is technology and we want to resolve the problem from the technology, we’re wrong, the path.”
Human-Centered Approaches to Technology
John Kiariye from Kenya in Parliamentary Session 3 emphasized the importance of leveraging existing social structures: “even as we are focusing on the technology, let us not forget that this is technology for humans and there are already existing social setups.”
Avoiding Solution-First Approaches
In Open Forum #53, Aubra Anthony warned against rushing to implement AI solutions without proper problem analysis: “AI is one tool in the toolbox, and when you have this sense of urgency, that can both help drive the conversation of how we leverage those tools to suit our needs, but I think it also risks forcing us to adopt a solution that may not always match the problem.”
Recommendations for Balanced Approaches
Speakers provided several recommendations for ensuring systemic challenges are not neglected:
- Holistic Integration: Yu Ping Chan from UNDP in WS #466 emphasized that “You need to think about the entire developmental spectrum across all these issues and really tie digital and AI, digital transformation itself, as part of this holistic approach.”
- Evidence-Based Policy: Joanna Kulesza advocated for “informed policymaking, research-based decision making” and warned that “Just because the new technology is there, it doesn’t mean it should be instantly allowed and used by the people.”
- Addressing Root Causes: Raoul Danniel Abellar Manuel emphasized that “sometimes the best way to solve a problem is to find the underlying basis for such a problem because directly confronting the problem may not be totally enough.”
The discussions revealed a clear consensus that while technology plays a crucial role in achieving the SDGs, success requires addressing underlying social, cultural, and economic structures rather than relying on technological solutions alone.
What is missing in our approaches to addressing the environmental impact of digital technologies?
Critical Gaps in Addressing the Environmental Impact of Digital Technologies
The environmental impact of digital technologies emerged as a significant concern across multiple sessions at IGF 2025, with discussions revealing several critical gaps in current approaches to addressing these challenges.
Lack of Integration in Digital Governance Frameworks
A fundamental issue identified was the absence of environmental considerations in digital policy frameworks. As noted in the ROAM X session, Fabio Senne observed that “environmental issues are largely overlooked in digital policies so far” and that “there are still issues such as energy consumption, e-waste and emissions that are not yet well integrated into the governance framework.”
Massive AI Energy Consumption and Water Usage
Multiple sessions highlighted the alarming energy and resource consumption of AI systems. In the AI for Sustainable Development session, Professor Rony Medaglia revealed that “to generate one single image with a large language model, such as ChatGPT, uses the same amount of CO2 as charging your mobile phone up to 50%” and that “the global AI demand may be accountable in two years from now for a water withdrawal equal to six times the whole country of Denmark for a whole year of water.”
Lack of Transparency and Measurement
A critical gap identified was the absence of transparent reporting on environmental impacts. In the Make Your AI Greener workshop, Ioanna Ntinou emphasized that “we should start by having some transparency in the way we report energy” and called for “developing evaluation standards at prompt level.”
This transparency deficit was further highlighted in the AI at a Crossroads session, where Ana Valdivia noted that “84% of widely used large language models provide no disclosure at all of their energy use or emissions.”
E-Waste Management Crisis
The E-Waste Management session revealed significant gaps in waste handling, with Emmanuel Niyikora noting that “we have 32%, which is formally collected and recycled. So which means the remaining 68% remain unmanaged.” A participant emphasized the measurement challenge: “We can’t manage what we can’t measure. And right now, the global e-waste system is mostly blind.”
Climate Justice and Global Inequality
Environmental impacts disproportionately affect the Global South. In Main Session 2, Anna from R3D Mexico highlighted how companies move data centers to avoid environmental scrutiny: “when it has established hyperscale data centers in places like the Netherlands that had made people publicly pressured for them to stop being constructed there, so then you move them to global south countries” causing “all the issues with extractivism, with hydric crisis, with pollution arrive to other communities.”
Need for Knowledge-Based Governance
The Sustainable Digital Growth session identified fundamental research gaps. Jan Eivind Velure emphasized that “the governance must be knowledge-based” and called for comprehensive lifecycle analyses to establish “a shared reference point that we can start talking around.”
Misaligned Incentives and Investment Priorities
The Local AI Policy Pathways session highlighted problematic investment patterns, with Anita Gurumurthy noting that “Between 2022 and 2025, AI-related investment doubled from $100 to $200 billion. By comparison, this is about three times the global spending on climate change adaptation.”
In the Make Your AI Greener workshop, Mark Gachara observed that “there’s a lot of money going into climate or smart agriculture or climate AI solutions, but it’s to make them more efficient, more effective, as opposed to how do we mitigate the harms that they are causing.”
Children’s Environmental Concerns
Young people demonstrated heightened environmental awareness. In the Elevating Children’s Voices session, Dr. Mhairi Aitken reported that “Where children have awareness or access to information about the environmental impacts of generative AI models, they often choose not to use those models.”
Need for Active Intervention
Multiple sessions emphasized that environmental sustainability will not emerge naturally. In the Make Your AI Greener workshop, Ioanna Ntinou stressed that “sustainable AI is not going to emerge by default. We need to incentivize and it needs to be actively supported.”
The discussions revealed that addressing the environmental impact of digital technologies requires comprehensive policy integration, transparent reporting mechanisms, better measurement systems, climate justice considerations, knowledge-based governance frameworks, aligned incentives, and active intervention rather than relying on market forces alone.
What innovative strategies could be used to raise public awareness about the environmental and health impacts of e-waste and encourage more responsible disposal practices?
Innovative Strategies for E-Waste Awareness and Responsible Disposal
The discussions across multiple Internet Governance Forum sessions revealed several innovative approaches to raising public awareness about e-waste and promoting responsible disposal practices, though the topic received limited dedicated attention in most sessions.
Education-Based Approaches
A key theme emphasized the importance of early education. In the Taking Stock session, an Executive Secretary highlighted e-waste as a “silent crisis with environmental consequences and societal consequences” and advocated for “responsible waste management from the primary school”, emphasizing that “durability on these strategies needs to lean on changing the behaviors, especially starting from education.”
Technology-Enabled Solutions
The Building Digital Policy for Sustainable E-Waste Management workshop presented several innovative technological approaches. Hossam El Gamal introduced the “Dr. Wee” initiative, which “uses a smartphone app to incentivize e-Waste collection and facilitate proper dismantling and sorting.”
Flexible Collection Systems
Jasmine Ko from Hong Kong described innovative mobile collection strategies, explaining that “we at the Hong Kong government trying to have some flexible hour on some mobile station. So not just in a physical store… we actually have a truck to go around different district in B.C. office CBD area so that to cater people who really want to do recycling during the office hour or the lunch break.”
Digital Product Passports
The same session also highlighted the development of digital product passports, with participants working on “open source implementations of digital passports for electronics” to lower entry barriers for innovation.
Circular Economy and Right to Repair
Several sessions touched on broader circular economy approaches. In the Bridging the Connectivity Gap session, Onica Makwakwa emphasized the right to repair as crucial, stating: “It really baffles me that we are also talking about climate issues and e-waste and sustainability of the planet. But we still have so many countries where the right to repair is not practical and a reality for these devices.”
The Innovative Regulatory Strategies session highlighted device refurbishment initiatives, with Leandro Navarro from eReuse explaining how “there are devices which are no longer used by one owner, but still they have a lot of years of life span. And then, I mean, refurbish them, refurbish them and collecting them and giving them to people that need a device” can help both connectivity and environmental goals.
Mindset and Behavioral Change
The Sustainable Digital Growth session emphasized the importance of changing consumer behavior, with Minister Karianne Tung noting that “to work with the citizens and their mindset on how they use things are important” for addressing the fact that “most emission from digitalization is not from data centers but they are from mobile phones, iPads and like hardware.”
Overall, while e-waste received limited attention across most sessions, the discussions that did occur highlighted the need for multi-faceted approaches combining education, technology, policy innovation, and behavioral change to address this growing challenge.
How can we ensure that efforts to create safe online spaces for children don’t infringe on their rights to privacy and free expression?
No relevant discussions found.
How can we design and enforce gender-responsive laws and legal frameworks that effectively protect women from online harm while promoting their digital rights and participation?
Designing and Enforcing Gender-Responsive Laws and Legal Frameworks for Women’s Online Protection
The discussions across multiple IGF 2025 sessions revealed significant challenges and emerging solutions for protecting women from online harm while promoting their digital participation. The conversations highlighted both the urgent need for comprehensive legal frameworks and the complexity of implementing effective protections.
The Scale of the Challenge
Sessions consistently emphasized the growing nature of online gender-based violence. In Parliamentary Session 3, Nighat Dad shared alarming statistics from Pakistan: “Since 2016, through our digital security helpline, we have dealt with more than 20,000 complaints from hundreds of young women every month, female journalists, now more from women influencers and content creators, women politicians, scholars, and students.” The digital divide compounds these challenges, as noted in High Level Session 5 by Baroness Jones: “Globally there were 244 million more men than women using the internet in 2023.”
Cultural Sensitivity in Legal Design
Multiple sessions emphasized the critical importance of cultural context in designing gender-responsive laws. In Parliamentary Session 5, Anusha Rahman Ahmad Khan from Pakistan provided a stark example: “And in my country, even an aspersion on a girl is good enough to kill her. So even if they would not die physically, but they are dead emotionally. So we need to be very careful of the fact that the culture in which we are living, the social media platforms have to be sensitive about that culture.” This sentiment was echoed in Open Forum #17, where Amira Saber highlighted deepfake risks: “Imagine a girl who is living in a village with certain cultural norms and she has leaked photographs of her on pornography. This might threaten her life directly. She could be killed.”
Challenges with Existing Legal Frameworks
Sessions revealed significant problems with current approaches to gender-responsive legislation. In Parliamentary Session 3, Neema Iyer highlighted a critical paradox: “the laws that do exist, especially in our context, have actually been weaponized against women and marginalized groups. So many of these, you know, cybercrime laws or data protection laws, have been used against women, have been used against dissenting voices, against activists, to actually punish them rather than protect them.” She further noted that “legislative frameworks are often too narrow. They, you know, they focus on takedowns or criminalization, or they borrow from Western contexts, but they don’t really meet the lived realities of women.”
Enforcement Challenges
Even where appropriate laws exist, enforcement remains problematic. In Networking Session #232, Onica Makwakwa identified a critical gap: “we always have really great policies, cybersecurity laws that criminalize the behavior. However, there is no knowledge of how to enforce it, including within the judiciary as well… we have women who continuously get harassed, including serious cases of revenge porn, where even the police are laughing at the incidents because they don’t realize that, in the same way you see someone speeding down the highway and you chase them because they’ve broken a law, this is how we expect judiciary clusters to actually respond to these cases.”
Judicial challenges were further explored in WS #190, where Marin Ashraf noted: “Online gender-based violence is typically dealt as a criminal offense in India under the General Penal Code and the Information Technology Act… In our study, we found that in several cases of online gender-based violence, it unfortunately failed to meet the high burden of proof that is required because of difficulties in bringing in expert testimony, relying on witnesses, and ensuring the admissibility of digital evidence.”
Emerging Technologies and New Challenges
The rise of AI and deepfake technology presents new challenges for legal frameworks. In WS #70, Yi Teng Au explained the legal complexity: “in many countries, the laws against deepfakes is very muddy because it’s a picture that is modified. Although it’s a picture of a face, the body, it’s not. So a lot of the times, it’s very hard to [prosecute] the perpetrators.” Juliana Cunha emphasized the gendered nature of this issue: “This is a problem about gender-based violence and it’s just another form of it,” reflecting “broader systemic gender inequalities.”
Promising Legislative Developments
Several sessions highlighted positive legislative developments. In Parliamentary Session 3, Raoul Danniel Abellar Manuel described Philippine efforts: “we recently had the Republic Act 11930… aside from content take[down]s, one major component of this is the assertion of extraterritorial jurisdiction” and “the House of Representatives, on its part, approved the expanded anti-violence bill. It defines psychological violence, including different forms, including electronic or IC[T] devices.”
The UK’s approach was also mentioned in WS #70 by Kenneth Leung.
What are the potential negative consequences of framing digital rights primarily in terms of individual liberties, potentially overlooking other rights and responsibilities?
Potential Negative Consequences of Individual-Centric Digital Rights Framing
Several sessions at the Internet Governance Forum 2025 addressed the limitations and potential negative consequences of framing digital rights primarily through an individual liberties lens, highlighting the need to consider collective rights, public interest, and broader social responsibilities.
The Dominance of Individual Rights Over Collective Interests
The discussion revealed a clear pattern where individual rights have been prioritized at the expense of collective and public interest considerations. At the New Technologies and the Impact on Human Rights session, Allison Gilwald noted that “the individual rights have taken preference over maybe collective rights or public interest rights” and emphasized “extending those rights to look at some of the kind of collective implications of that.”
Invisibility of Non-Users and Affected Communities
One significant consequence of individual-focused rights frameworks is the exclusion of those who are indirectly affected by digital technologies but are not direct users. Anita Gurumurthy emphasized that “we need to account for those who are non-users. People who mine rare earth minerals, those whose lands are given away to data centers, who are dispossessed.”
Need for Collective Data Rights
The limitations of individual data rights were particularly highlighted in discussions about data governance. At the Local AI Policy Pathways session, Sarah Nicole argued that “The question of having a stake in your data has often been framed on a personal level… So the answer will not be on an individual perspective, but it would be on a collective one.” She emphasized that “the mentality really needs to shift from this personal data frame discussion… to a more collective and organization perspective.”
Consumer vs. Citizen Rights Framework
Another critical consequence identified was the reduction of people from citizens with rights to consumers in a marketplace. At the Innovative Regulatory Strategies session, Dr. Gillian Marcelle observed that “people are being treated like consumers in a neoliberal framing rather than citizens who have the right to communication and the rights to access.”
Balancing Competing Rights
The challenge of balancing individual privacy rights with other human rights was discussed at the Advancing Data Governance session. Joseph from Wikimedia Foundation asked about “how, through this process of harmonizing regional data protection laws and implementing such new laws, how we can ensure that all human rights are respected throughout this process and that the right to privacy does not come at the expense of any other potential right.”
Digital Solidarity vs. Individual Sovereignty
The concept of moving beyond individual digital rights toward collective approaches was further explored through discussions of digital solidarity. At the AI at a Crossroads session, Ana Valdivia advocated for “digital solidarity” rather than individual digital sovereignty, stating “rather than talk about digital sovereignty, that creates sort of like frictions between states, because all the states in the world want to become digital sovereign, we should talk about digital solidarity.”
Expanding to Economic and Environmental Rights
The discussions also highlighted the need to move beyond first-generation civil and political rights to include economic and environmental considerations. Allison Gilwald emphasized “extending that if one’s really concerned about redressing the current inequalities we see from a rights perspective is shifting that to second- and third-generation rights. So to economic and environmental rights.”
How can we move beyond the binary framing of ‘digital rights vs. security’ in discussions about encryption and lawful access?
No relevant discussions found.
How can we create comprehensive and effective governance frameworks for brain-computer interfaces and neurotechnology that adequately address ethical and privacy concerns? And how can we ensure that such frameworks are diligently implemented?
Governance Frameworks for Brain-Computer Interfaces and Neurotechnology: A Summary of IGF 2025 Discussions
The question of creating comprehensive and effective governance frameworks for brain-computer interfaces and neurotechnology received minimal attention across the Internet Governance Forum 2025 sessions, with only brief mentions in a few sessions and no detailed discussions on specific governance frameworks or implementation strategies.
Limited Recognition of the Technology’s Potential
Brain-computer interfaces were acknowledged for their transformative potential in High Level Session 3: AI & the Future of Work, where Ishita Barua highlighted their medical applications: “And perhaps most astonishing, brain-computer interface in combination with AI. With the help of these devices, people with paralysis can regain the ability to move, speak, and even write through direct decoding of brain activity.” However, this mention focused solely on the technology’s benefits without addressing governance concerns.
Emerging Policy Considerations
The intersection of AI and neurotechnology was briefly referenced in Open Forum #82: Catalyzing Equitable AI Impact, where Mariagrazia Squicciarini noted ongoing policy work: “The latest recommendation that has been worked on is about neurotechnologies and the impact they have on rights, on the people… And the special attention is also put at the crossroad between AI and neurotech, because that’s where the biggest impact may be on societies.”
Fundamental Rights Implications
The broader implications for fundamental rights were touched upon in WS #395: Applying International Law Principles in the Digital Space, where Nieves Molina discussed the evolving nature of freedom of thought in the digital age: “the issue of freedom of thought now takes a different connotation because technology actually [may] advance to that point that what you think can be reachable even if you don’t announce it… as technology will advance, we will have to take decisions on whether the legislation that we have covers [it].”
Policy Gap Identified
The overwhelming absence of substantive discussion on brain-computer interface governance across the 30+ IGF sessions reviewed reveals a significant policy gap. Despite the technology’s profound implications for privacy, human rights, and societal transformation, the forum provided no concrete proposals for comprehensive governance frameworks or implementation strategies. This lack of attention suggests that neurotechnology governance remains an emerging priority that has yet to receive adequate focus in international digital governance discussions.
How can we enhance data collection efforts to better capture the diversity among persons with disabilities, ensuring the development of more accurate and inclusive policies and interventions?
The question of enhancing data collection efforts to better capture diversity among persons with disabilities was discussed across several sessions at the Internet Governance Forum 2025, with the most comprehensive insights provided in disability-focused sessions.
In WS #69 Beyond Tokenism Disability Inclusive Leadership in IG, Dr. Derrick Cogburn provided the most detailed framework for disability data collection, highlighting existing resources: “we have two really, really good sources of disability data. One is called the Disability Data Initiative, which is led by Fordham University, and the Disability Data Hub, led by the World Bank. Both of these data sets, as well as the text data, provides tremendous data for us to be able to analyze how persons with disabilities are faring in this current period.” He emphasized that “most of this data is open data” and stressed the importance of “continuous capacity development in research capacity” to effectively utilize these tools.
A practical example of inclusive data collection was shared in Open Forum #21 Leveraging Citizen Data for Inclusive Digital Governance, where Omar Seidu from Ghana Statistical Service described their comprehensive approach: “And understanding the key stakeholders’ interest for engagement. That is very important because once you engage, in fact, in developing this, there are 12 different domains of persons with disabilities in Ghana. And we make sure we engage with all these different domains. And that helps us to develop the functionalities for persons who are partially blind, for persons who have hearing impairment and all that, to engage with the application.”
The importance of disability status as a segmentation factor in data collection was acknowledged in Open Forum #29 Advancing Digital Inclusion Through Segmented Monitoring, where Pria Chetty noted: “And then, of course, in our data, we have that valuable demographic segmentation. So that’s by gender, age, income level, education, location. Anika mentioned to include the peri-urban category in there, but also disability status and language as well.”
In Open Forum #82 Catalyzing Equitable AI Impact, the need for inclusive representation was highlighted, with Nupur Chunchunwala emphasizing that “humans are diverse. We have an aging population over 10 percent that’s going to get impacted. We have, of course, gender. We have ability in terms of disabilities that are coming on and a large population of neurodiverse individuals.”
The discussions revealed that effective data collection for persons with disabilities requires comprehensive stakeholder engagement across different disability domains, the utilization of existing open data sources, investment in research capacity development, and the integration of accessibility features in data collection platforms to ensure meaningful participation of all disability communities.
What is next in national, regional and global efforts regarding taxation in the digital economy?
Based on discussions across multiple IGF 2025 sessions, several key directions emerge for national, regional, and global efforts regarding taxation in the digital economy.
Regional Tax Optimization Initiatives
The African Union is leading significant efforts in digital taxation reform. At the African Union Open Forum, Maktar Sek from UNECA revealed comprehensive regional work: “Why we have developed one platform, tax calculator, to review the taxation in the ICT sector. I think we can share the link at the screen and this taxation has been conducted in 54 member state[s].” The focus is on optimization rather than revenue maximization, with Sek emphasizing that “Optimizing I[C]T tax can increase not only the GDP, but we have seen an increase according to our statistic of the broadband connectivity as well as the job creation.”
Device Taxation Reform for Digital Inclusion
A critical area for national efforts involves reducing taxation on digital devices to improve affordability. At the Connectivity Gap workshop, Onica Makwakwa highlighted the impact of device taxation: “Looking at affordability specifically, one of the things that we’ve done successfully in a couple of countries is looking at taxation of devices, and we found that there’s anywhere from 20 to 45 percent of taxation that is on devices, whether it’s an import duty tax, your VAT, or, you know, sales tax or what have you. And we’ve been able to demonstrate actually that if governments could just roll back some of those taxes, it actually increases uptake of digital technologies within the country.”
Digital Services Taxes and Platform Taxation
Multiple sessions addressed emerging models for taxing digital platforms. At the Editorial Media session, Anya Schiffrin outlined recent developments: “Late 2024, Australia announced that platforms that didn’t want to negotiate with publishers could pay a digital levy, which would be more expensive than what bargaining code payments would be. South Africa is also looking at digital levies. And many countries are considering things like digital services taxes with funding earmarked for journalism.”
Cross-Border Fiscal Cooperation
Global coordination efforts are gaining momentum. At the WSIS+20 dialogue, Juan Carlos Lara emphasized the need for international fiscal measures: “Much of this work requires funding, sustainable digital cooperation requires public investment and ad hoc funding may not do that or provide that in a sustainable way. So to link the financing for development process, to acknowledge the need for cross-border fiscal measures is also relevant here, including equitable taxation of digital services and other ways to generate resources for digital infrastructure.”
Digital Development Tax Proposals
New concepts for addressing digital inequality through taxation are emerging. At the AI Governance session, William Bird mentioned recent proposals: “The Global Digital Justice Forum, in fact, just made some of its submission to the WSIS Plus 20, and one of the things that they’re calling for is a digital development tax that should be imposed on these entities in order to fundamentally address this inequality.”
International Coordination Challenges
Future efforts face significant geopolitical challenges. Schiffrin raised critical questions about international cooperation at the Editorial Media session: “Can Europe stick together? In the US, those of us who care about this stuff want to know whether the EU and the rest of the world will cave, capitulate, or whether it will stick with its plans to tax and regulate big tech.”
Could a middle-ground solution be found between the efforts to advance global digital trade agreements and the call to address more immediate challenges, such as bridging digital divides and promoting data fairness?
Based on the meeting transcripts analyzed, the question of finding a middle-ground solution between advancing global digital trade agreements and addressing immediate challenges like digital divides and data fairness was only partially discussed in one session.
In WS #259 Multistakeholder Cooperation in Era of Increased Protectionism, speakers touched on both aspects of this challenge. Milton Mueller raised concerns about digital free trade and data movement across borders, specifically questioning WTO negotiations and multistakeholder participation. Flavia Alves highlighted the complexity of balancing different priorities, stating that “data flows is also not only an economic issue, but it’s also a privacy and security issue, that we need to be careful and balance how we do the safeguards on privacy at the same time on law enforcement and others.”
Regarding the more immediate challenges, Tatjana Trupina emphasized the urgency of addressing basic connectivity issues, noting that “the original WSIS goal was connectivity, we still have one third of the world not connected, the WSIS has to deliver, we have to strengthen its implementation to address the current and emerging digital divides.”
While this session provided some insights into the tension between advancing digital trade and addressing fundamental digital divides, no concrete middle-ground solutions were explicitly proposed or discussed in detail across the analyzed sessions.
How can we create meaningful accountability mechanisms for big tech companies that go beyond fines and actually drive changes in corporate behaviour?
Creating Meaningful Accountability Mechanisms for Big Tech Companies Beyond Fines
The discussions across multiple sessions at the Internet Governance Forum 2025 revealed widespread consensus that traditional fines alone are insufficient to drive meaningful changes in big tech corporate behavior. Speakers emphasized the need for more innovative and comprehensive accountability mechanisms.
Structural and Regulatory Approaches
Several sessions highlighted the need for structural changes beyond monetary penalties. In the competition rights workshop, Camila advocated for “bolder theories of harm” and suggested “breaking up companies” as a solution, stating “as we see that they have an unmeasurable impact in our lives, maybe the solution is that they didn’t have to be that big.”
The child safety high-level session emphasized that regulation combined with enforcement works, with Thibaut Kleiner noting “You cannot just count on the goodwill of companies that are making profits to change their features unless they have really some pressure also coming from the regulators.”
Transparency and Due Diligence Requirements
Multiple sessions emphasized transparency as a key accountability mechanism. In the AI policy forum, Wai Sit Si Thou advocated for “a public disclosure accountability mechanism that could reference the ESG reporting framework” with “public disclosure on how this AI works and its potential impact.”
The human rights session highlighted the need for mandatory human rights due diligence, with Peggy Hicks stating “we do need to create the right types of both incentives and disincentives for companies to actually, you know, do the risk assessment that needs to happen.”
Innovation in Financial Accountability
Beyond traditional fines, speakers proposed creative financial mechanisms. In the media dependency session, Anya Schiffrin suggested requiring platforms to “post a billion dollar bond before they start operations in countries like Kenya or Brazil, so that when there’s a fine, they have to pay it.”
The main session discussed social offset mechanisms similar to carbon offsets as an innovative accountability approach.
Dialogic and Collaborative Approaches
The German regulator session presented a “dialogic regulation” model where Michael Terhorst explained: “we don’t just, yeah, send a letter saying, okay, you have to, I don’t know, pay €5 million because your platform isn’t safe. We get in touch with the provider and say, okay, we found some deficits on your platform.”
The parliamentary session on vulnerable groups advocated for collaborative design, with Neema Iyer suggesting “what would be lovely in a really perfect world would be if these algorithmic decisions are co-created by all of us.”
Legislative and Parliamentary Engagement
Parliamentary sessions emphasized direct engagement with tech executives. In the cybercrime parliamentary session, Pavel Popescu advocated for “bringing platform CEOs before parliamentary committees” to ask them “very complicated and direct questions.”
International Cooperation and Collective Action
Multiple sessions emphasized the need for coordinated global action. The African cybercrime forum highlighted regional coordination, with Senator Besalisu arguing that deterrence requires global reach: “unless and until all of those [big techs] know that if you violate the law in South Africa, your sanction is not limited to South Africa.”
The parliamentary cybercrime session discussed creating “for big tech consequences at the global level, at the UN level” including potential binding treaties.
Investor and Market-Based Mechanisms
The conflict accountability session explored investor pressure as an accountability tool, with Kiran Aziz explaining how investors can exclude companies from portfolios, providing “a quite thorough exclusion document and this is a way to hold companies accountable but this is also to help other investors to get insight about where we draw the line.”
Public Procurement as Leverage
The cloud autonomy session highlighted public procurement as a powerful tool, with Agustina Brizio noting that “public procurement is [an] amazing le[ver] here” and advocating for governments to include “in every public procurement contract things as data localization, as having [open] standards, as having a more transparent governance.”
Civil Society and Multi-Stakeholder Oversight
The digital rights partnership session emphasized civil society’s watchdog role, with Ian Barber noting that “civil society can play a key role…serving as kind of a watchdog or an observer” and that “a lot of this comes down to transparency, openness, and decision-making in the processes.”
Key Challenges and Gaps
Despite these various approaches, significant challenges remain in implementing these accountability mechanisms at scale.
Can digital trade provisions in international agreements be designed in a way that facilitates international trade while also preserving domestic policy space for regulating the digital economy?
The question of whether digital trade provisions can facilitate international trade while preserving domestic policy space received limited but significant attention across several IGF 2025 sessions, with most discussions highlighting the inherent tensions rather than offering concrete solutions.
The most comprehensive discussion occurred in the Policy Network on Internet Fragmentation (PNIF), where Marilia Maciel identified this as a core challenge, noting that “one of the main elements that has traditionally strengthened fragmentation trends is the tension between the cross-border nature of the internet and our territorially grounded political and legal systems.” She also criticized the current approach, stating that “the IGF has not really responded over the last 10 years to the significant migration of digital policy discussions, data flows, algorithms, privacy, AI, to digital trade negotiations and agreements, which are, by the way, not transparent, not accountable, not multi-stakeholder.”
The tension between sovereignty and global connectivity was further explored in WS #259 on Multistakeholder Cooperation, where Tatjana Trupina acknowledged that “There is a tension, especially in the current geopolitical climate, between sovereign states and their borders, and them trying to navigate this climate. And tension between states and sovereign borders and the open, interoperable, and globally connected Internet.” She explained that states have “very valid concerns, for example, about security, safety of their citizens, online harms, crime, as well about their autonomy” but addressing these concerns “could harm the global interoperability and connectivity, perhaps sometimes even inadvertently.”
Concerns about the dominance of trade frameworks over democratic governance were raised in multiple sessions. At the WGIG+20 event, Nandini from IT4Change India expressed concern about “in recent years we have seen a lot of digital governance issues, data governance issues in particular, being taken out of a democratic space and into very closed-door multilateral spaces such as digital trade negotiations.”
A practical example of this tension was provided in the Open-source AI forum, where coordination challenges were highlighted: initiatives were “completely, you know, prevented by the African Continental Free Trade Area digital protocols, which are cut and paste from WTO, and so, you know, trade always trumps everything else.”
While the discussions identified the fundamental tensions between facilitating digital trade and preserving domestic policy space, no clear solutions were proposed for designing digital trade provisions that could effectively balance these competing interests.
How do we ensure that efforts to regulate the digital economy don’t inadvertently entrench the market power of dominant platforms?
No relevant discussions found.
Are there risks associated with relying too heavily on self-regulation and corporate social responsibility in addressing tech-related societal challenges? If so, how do we address them?
Risks of Over-Reliance on Self-Regulation and Corporate Social Responsibility
The discussions across multiple IGF 2025 sessions revealed significant concerns about relying too heavily on self-regulation and corporate social responsibility to address tech-related societal challenges. A clear consensus emerged that self-regulation has largely failed across various domains, from child safety to content moderation to platform accountability.
Evidence of Self-Regulation Failures
Multiple speakers provided concrete evidence of self-regulation failures. In High Level Session 4, Leanda Barrington-Leach definitively stated that “Self-regulation has not worked and good regulation does work.” She further emphasized that “whistleblower reports and leaked internal documents show how time and again tech companies are aware of the harm they are causing children and choosing to do it anyway.”
In Main Session 2, Jhalak Kakkar provided historical context, noting that “we’ve also seen that companies have never been particularly adept at only working under the realm of self-regulation. I mean, whether, and this is across industries, I’m not only pointing to tech, you know, we’ve seen that time and time again over the last 150 years.”
Specific Examples of Self-Regulation Inadequacies
Several sessions provided specific examples of self-regulation failures. In Day 0 Event #255, Marwa Fatafta criticized inadequate corporate self-assessments, explaining how “Microsoft had recently issued a statement after a year and a half of public mobilization… in which they said, well, we conducted an audit to see whether our technologies have contributed to harm or targeting of civilians in the Gaza Strip. And while we don’t have an insight into how our technologies are used by Israel, especially in air-gapped military bases, we concluded that we have not contributed to any harm.”
In Global Youth Summit, Brendan Dowling explained Australia’s rationale for government intervention, stating that “the response of social media companies has been lacklustre at best, disingenuous at worst.”
Structural Problems with Self-Regulation
Several speakers identified fundamental structural problems with self-regulation. In Lightning Talk #109, Michael Terhorst highlighted the objectivity problem, noting that “when we look at some voices saying that there is already enough… when we look at social media platforms, for example, providers already do a lot to protect the personal integrity of minors. Those voices are from the providers themselves. So, maybe it is not really objective.”
In NRI Collaborative Session, Dennis Broeders explained the fundamental conflict of interest: “I think the general rule is when industries start saying, no, we’ll do it ourselves, we’ll self-regulate, we’ll do ethical framework, we’ll do all these things, that means they’re afraid of regulation, right, that’s the only reason they’re stepping forward to do this.”
Power Imbalances and Resource Disparities
A critical concern raised was the significant power imbalance between tech companies and regulatory bodies. In Open Forum #48, Leanda Barrington-Leach highlighted that “There is very little knowledge, there is very very little capacity and as I mentioned before there is a huge amount of resources on the other side.” She emphasized there is “a massive massive imbalance of power and resources between in this case the tech sector.”
Solutions and Approaches
Multiple sessions discussed solutions to address the failures of self-regulation. In Main Session 2, Jhalak Kakkar emphasized the need for “external regulators, we need communities to be engaging, a bottom-up approach, civil society to be engaging, multilateral institutions to be coming in.”
In High Level Session 1, Thibaut Bruttin argued that “Governments should not be afraid of regulating and legislating” and that their “responsibility is to build the framework, not to go into the nitty-gritty details of everything happening in the media field, but really building this framework that enables media to flourish.”
In Day 0 Event #252, Tawfik Jelassi concluded that “Self-regulation did not work, did not deliver. We need maybe a core regulatory system, which is truly multi-stakeholder.”
Balanced Approaches
Some speakers advocated for balanced approaches that combine regulation with incentives for responsible corporate behavior. In WS #106, Julija Kalpokiene noted that “if the right incentives exist, then self-regulation could be much better because the industry know what they’re able to do, so it can be more effective and better focused than the government approach.”
However, the overwhelming consensus across sessions was that effective regulation with enforcement mechanisms is essential to address the societal challenges posed by technology platforms and companies, as self-regulation alone has proven inadequate to protect public interests.
What are the implications of the growing role of military and national security interests in shaping global cybersecurity norms?
The Growing Role of Military and National Security Interests in Shaping Global Cybersecurity Norms
The discussions across multiple IGF 2025 sessions revealed significant concerns about the increasing militarization of cybersecurity and its implications for global digital governance. A central theme emerged around the transformation of cyberspace from a communication tool into a domain of warfare.
Cyber as a Domain of Warfare
The evolution of cyberspace into a military domain was explicitly addressed in WS #193, where Dr. Monojit Das explained that “cyberspace is no more just a tool of communication; it’s a frontier of warfare. After air, space, land, water, cyber is a frontier of warfare, domain of warfare rather,” and noted the concerning escalation potential where “there are accepted definitions by some countries that mentions, you know, if at all a large-scale cyber attack is waged, so it can be retaliated with a full-scale war.”
National Security and Digital Sovereignty
The Policy Network on Internet Fragmentation (PNIF) highlighted how national security concerns are driving internet fragmentation. Marilia Maciel observed “Today, I think particularly in developed countries, which are the front runners of digital technology, re-territorialization and digital sovereignty have been increasingly invoked to strengthen the state itself, especially when these expressions are associated with national security concerns and protectionist worldviews.” The discussion revealed “there’s a growing entanglement between national security and economic security and digital technologies cutting across both.”
Tech Industry Militarization
A particularly concerning development discussed in Day 0 Event #255 was the increasing militarization of tech companies. Marwa Fatafta detailed how “both Google and OpenAI have both quietly dropped their voluntary commitments earlier this year not to build AI for military use or surveillance purposes, signaling their readiness to deepen their ties with the arms industry.” She also noted that “senior executives from high-tech firms, specifically Meta, Open AI and Palantir, are joining the US Army Reserve at a new unit called Executive Innovation Corp.”
AI and Military Applications
The military applications of AI were extensively discussed in Open Forum #3, where Olga Cavalli noted that “The strategic and geopolitical importance of artificial intelligence in the military sphere has a central role in the wars of the future.” She warned that AI could “alter the balance of power between countries, generating risk of rapid escalation and preemptive action due to speed of reaction and perception of strategic advantage.”
Critical Infrastructure and State-Sponsored Attacks
Open Forum #45 addressed the targeting of critical infrastructure, with Pavel Mraz noting that “nearly 40% of all documented cyber operations by states have focused on critical infrastructure.” This targeting has prompted international efforts, as “states have called for reinforcing an international taboo against targeting these types of systems.”
Ransomware as National Security Threat
The elevation of ransomware from a cybersecurity to a national security issue was discussed in Day 0 Event #258. Ambassador Brendan Dowling explained the shift: “I think cybercriminals flourished in a context where we thought about ransomware as a cybersecurity issue that our CISOs or our ICT teams needed to be conscious of. But as we’ve seen the ramifications from ransomware attack resonate and ripple through society, I think increasingly we have to be conscious that these are not confined, they’re not purely cyber incidents, these are whole of nation incidents which governments need to take much more seriously.”
Balancing Security and Rights
The challenge of balancing national security with citizen rights was addressed in #205 Launch of the Global CyberPeace Index. Dr. Subi Chaturvedi emphasized: “In a war where you will always make sure that national sovereignty and security will take precedence, we have to ensure all of us come together and all of us work together to be able to create a multi-stakeholder environment where freedom of speech and expression are still held dear and we are able to balance citizen rights along with national security.”
The discussions revealed that the growing militarization of cybersecurity presents significant challenges for global digital governance, including risks of internet fragmentation, escalation of conflicts, and the erosion of civilian-oriented internet governance principles. The need for international cooperation and clear norms to prevent escalation while protecting citizen rights emerged as critical priorities.
What can be done to improve communication and coordination between technical and diplomatic communities in the cybersecurity domain?
Improving communication and coordination between technical and diplomatic communities in cybersecurity emerged as a critical theme across multiple sessions at IGF 2025, with speakers identifying both challenges and concrete solutions.
Breaking Down Silos and Building Understanding
The fundamental challenge of overcoming institutional silos was highlighted throughout the discussions. During the Open Forum on Critical Infrastructure, Marie Humeau specifically noted the need for “overcoming the silos between diplomatic and technical communities.” This sentiment was echoed in the Policy Network on Internet Fragmentation session, where the Internet Architecture Board representative emphasized: “So when you asked what could be the next step, I hope that when these regulations comes in, there is a trust in the technical community that yes, we can rely on their expertise that they have developed… So if there could be more collaboration, more understanding from each other’s point of view, that’s the hope that the technical community would have.”
Creating Regular Meeting Spaces
Several speakers emphasized the importance of establishing informal forums for ongoing dialogue. Lars Erik Smevold from the Critical Infrastructure forum advocated for more accessible venues: “It’s definitely important to collaborate more with the diplomats and diplomacy to get a better common understanding of what’s actually needed… So to have some arenas that we can actually meet, talk, not that formal in a way.” The value of in-person collaboration was reinforced by Dhruv Dhody from the Internet Architecture Board at the Closing Ceremony, who noted: “Being here in person also clearly gave an idea that this is such an important forum for not just communication, but real collaboration, for us to find clear pathways, for us in technical community to be part of discussions in policy, and similarly, policy discussions coming in in technical spaces.”
Innovative Partnership Models
Practical examples of successful coordination emerged from the discussions. In the ransomware accountability session, Chelsea Smethurst described Microsoft’s novel approach: “so just earlier this month Microsoft actually announced a pilot program with Europol to integrate our digital crimes investigators into their European cybercrime center in The Hague, and I think these sort of novel model public-private partnerships are an interesting thing to try out across different sectors, right? Because then you’re marrying both the private sector expertise and sort of the front lines that we see in ransomware with the legal and investigatory powers of states and governments.”
Capacity Building and Fellowship Programs
Educational initiatives were identified as crucial for bridging knowledge gaps. Christopher Painter emphasized comprehensive training in the Cyberdefense and AI session: “both at a technical level, so people understand the technology and how to use it, but it’s also at the policy level so that diplomats and others can debate these issues in these different forums intelligently.” Floreta Faber shared Albania’s experience of embedding “an experienced diplomat inside a technical organization” and mentioned programs like the Women in Cyber Fellowship as trust-building mechanisms.
Ensuring Meaningful Participation
The importance of substantive rather than tokenistic collaboration was emphasized throughout. Paul Ash stressed in the Parliamentary session the critical role of technical expertise: “One of the biggest issues we’ve seen over the last five to ten years is sometimes the sidelining of the technical community that Mallory described earlier. Those who’ve built the internet and know how it works. And even if we’re dealing with issues right up at the top of the content layer, those folk have a really important role to play in helping advise how to keep that process safe.” Francesca Bosca reinforced this point by emphasizing that “collaboration needs to be meaningful, not just tokenized. Collaboration is not just a buzzword that we need to have there, but it needs to make an impact.”
Multi-stakeholder Approaches
The discussions consistently highlighted the effectiveness of inclusive approaches that bring together diverse expertise. Joyce Chen from the internet fragmentation session described their work as “fighting internet fragmentation by building bridges between the technical and the non-technical realms.” The Global CyberPeace Index launch exemplified this approach, with Vineet Kumar explaining how “just as peace cannot be imposed by one actor alone, the index is built through a multi-stakeholder approach involving government and regulators, technology platform, civil society, and digital right advocates, academia, the technical community, and most importantly, the people whose lives are [shaped] by these systems.”
Given the increasing use of AI in cybersecurity, how can we ensure that AI-driven security measures don’t inadvertently create or exacerbate vulnerabilities?
No relevant discussions found.
As end-to-end encryption becomes more widespread, how can we balance the need for privacy and security with the challenges it poses for combating child exploitation online? Are current proposals for ‘client-side scanning’ a viable solution or a dangerous precedent?
No relevant discussions found.
With the increasing complexity of supply chains in technology manufacturing, how can we effectively implement ‘security by design’ principles when multiple actors across various jurisdictions are involved in the production process?
No relevant discussions found.
How can we operationalise international norms on cybersecurity and critical infrastructure protection?
The operationalization of international norms on cybersecurity and critical infrastructure protection was extensively discussed across multiple sessions at IGF 2025, revealing both established frameworks and significant implementation challenges.
Existing Framework and Challenges
Several speakers emphasized that while international norms exist, their real-world implementation remains problematic. As discussed in the Open Forum on Critical Infrastructure, Pavel Mraz explained that “in order for the UN framework to have a real world impact and not remain just on paper, it must be operationalized nationally through legislation, institutional coordination, but also sustained investment in cybersecurity.”
The gap between established norms and practice was highlighted in the ransomware accountability session, where Julie Rodriguez Acosta noted that “while the framework says, the reality on the ground tells us a different story.”
Multi-Level Coordination Requirements
Speakers consistently emphasized the need for coordination across multiple levels. Caroline Troein, in the critical infrastructure forum, stressed that coordination “needs to happen at the national, regional, and global levels.”
This multi-level approach was reinforced in the subsea cables protection session, where Minister Tung stated: “We need a combination of national, regional, and international cooperation to achieve effective resilience measures.”
Practical Implementation Mechanisms
Several concrete operationalization mechanisms were discussed:
Legal and Regulatory Frameworks
Giacomo Persi Paoli suggested in the ransomware session developing a “model law or a model legislation for those countries that need to adopt some sort of regulatory measures.”
Crisis Communication Mechanisms
Pavel Mraz highlighted the importance of establishing crisis communication channels, noting in the critical infrastructure session that “countries are designating points of contacts globally for crisis communication in recognition that you cannot exchange business cards in a hurricane.”
Regional Cooperation Examples
Practical examples were shared, including Under-Secretary Syrjala’s mention in the subsea cables session of “the recent Memorandum of Understanding, which the Baltic Sea NATO Allies and the EU have published.”
Implementation Challenges
Despite established frameworks, significant challenges remain. In the conflict and crises session, Dennis Broeders noted that “Implementation is really hard when it comes to negative norms.”
The Dynamic Coalition session highlighted that despite standards development efforts, “once these standards are developed, not very many governments or even institutions, non-government institutions, even use them.”
Integration with International Law
The need for legal integration was emphasized in the Digital Emblem session, where Samit D’Cunha explained that “there need to be these common understandings of what the digital emblem is and how it’s respected and what happens when it’s not respected.”
The discussions revealed that while comprehensive international frameworks exist, successful operationalization requires sustained commitment across technical, legal, diplomatic, and institutional dimensions, with particular emphasis on bridging the gap between high-level agreements and practical implementation at national and regional levels.
How can we responsibly deploy emerging technologies like AI and quantum computing in critical infrastructure while addressing potential vulnerabilities?
Responsible Deployment of AI and Quantum Computing in Critical Infrastructure
The deployment of emerging technologies like AI and quantum computing in critical infrastructure presents both significant opportunities and challenges that require careful consideration of vulnerabilities and responsible implementation strategies.
AI Deployment Challenges and Solutions
Several sessions highlighted the evolving threat landscape that AI brings to critical infrastructure. As noted in Open Forum #45, countries need to “think about how do I enhance my maturity, sharpen my responsiveness, adapt to the new challenges that, for example, AI brings, and even maybe prepare for things like what would a quantum future look like.”
The Day 0 Event #258 emphasized that “AI has enhanced the sophistication of social engineering and phishing campaigns, further expanding the ransomware threat landscape.” However, AI can also be part of the solution, with discussions about “creative uses of artificial intelligence tools to really counter ransomware.”
Governance and Risk Management Frameworks
The Dynamic Coalition Collaborative Session stressed the importance of “AI governance in IoT risk and regulations, as AI is both an enabler and a risk in IoT,” and emphasized the need for “systems to be transparent, ethical, secure, auditable, and subject to human oversight.”
The WS #193 outlined specific policy measures for resilience, including the need to “ensure zero trust by design for AI systems” and that “policy needs to mandate AI threat modeling as well as red teaming for these AI systems.”
The WS #257 recommended starting with “established frameworks like NIST’s AI Risk Management Framework or international certifications like ISO 42001” and emphasized that “the human-in-a-loop approach is also critical.”
Quantum Computing Threats and Responses
The quantum computing threat to critical infrastructure was addressed in several sessions. The Dynamic Coalition Collaborative Session explained that “the advancement of quantum computing will pose, already pose, a significant threat to our current internet security” and discussed the “harvest now, decrypt later” risk.
The Launch Event #169 highlighted that the threat is “particularly acute for IoT devices, which are increasingly integrated into our daily lives and critical infrastructure.”
The WS #193 emphasized that “right now it’s a race against time the current encryption we have in place how does it stand against quantum computing or quantum power computers” and recommended the need to “think more post quantum cryptography to protect systems especially like AI and other powerful systems that could be exploited.”
Critical Infrastructure Context
The Protection of Subsea Communication Cables session highlighted how “technological innovation, particularly artificial intelligence, is reshaping the landscape” and that “the training and deployment of large AI models demand massive computational capacity, as we know, and energy-intensive data centers, which, in turn, depend on robust, high-capacity connectivity, also submarine cables.”
The Day 0 Event #174 provided academic perspective, noting that “cybersecurity frames emerging technology as both a threat and a solution” and that current evaluation methods “prioritizes cutting edge technology while overlooking older critical dependencies.”
Lifecycle Approach to Governance
The WS #123 emphasized that “governance is not something that can be added on after the fact. It’s not an afterthought. It needs to be something which is designed to fit in each stage of the life cycle” and noted that “industry actors often are the first to encounter and understand AI risks and vulnerabilities, in part due to their direct involvement in developing and deploying these technologies.”
Implementation Challenges
The discussions revealed that while frameworks exist, implementation remains challenging. The Launch Event #96 highlighted that quantum-resistant solutions require “the standards to be in place and accepted in a broad way” before widespread implementation.
The discussions across multiple sessions demonstrate that responsible deployment of AI and quantum computing in critical infrastructure requires a multi-faceted approach combining technical solutions, governance frameworks, international cooperation, and lifecycle management strategies to address both current and emerging vulnerabilities.
How to establish universal baseline or minimum cybersecurity requirements for critical infrastructure protection across jurisdictions?
Establishing Universal Baseline Cybersecurity Requirements for Critical Infrastructure
The question of establishing universal baseline cybersecurity requirements for critical infrastructure protection across jurisdictions was addressed across multiple sessions at IGF 2025, revealing both the urgent need for such standards and the significant challenges in achieving them.
The Fragmentation Challenge
The core challenge was clearly articulated in Open Forum #45, where Timea Suto highlighted “we have a huge issue with fragmentation, not a shared global understanding of what constitutes critical infrastructure, with definitions and legal frameworks differing widely between countries, and in some cases missing altogether.” She emphasized the need for “smarter policy” rather than more regulation.
Leveraging Existing Standards and Frameworks
Several sessions emphasized building upon existing standards rather than creating entirely new frameworks. In WS #193, Samaila Atsen Bako advocated for leveraging established standards, citing the example of “the Open Web Application Security Project, OWASP, that released this IoT project… This means that both the manufacturers and users have a guide, and even regulators can choose the guide as a foundation or template for what the baseline security will look like when it comes to IoT devices, and then when that is enforced by a regulator, then you’ve raised the security bar in IoT devices globally, because standards are recognized globally.”
Regional Approaches and Global Cooperation
The Dynamic Coalition Collaborative Session highlighted uneven implementation globally, with Liz Orembo noting that while “Taiwan, Netherlands, Italy and the US” have incorporated global standards, “when you go to African Union they really don’t have IT procurement standards there like the European Union or even the US itself.” Jonathan Cave discussed the need for “mutual recognition arrangements” and “schemes to deal with these problems, including labeling schemes…and certification schemes” to address cross-border challenges.
International Cooperation and Norms
The GigaNet Academic Symposium highlighted existing international frameworks. Joanna Kulesza noted that work “around cybersecurity through the United Nations” has “resulted in 11 norms of responsible state behavior” which “are clearly applicable also to cyberspace infrastructures,” and suggested this “is the time to make it more harmonized and operational.”
The Challenge of One-Size-Fits-All Solutions
In WS #193, when directly asked about universal cybersecurity standards, Osei Keija responded that “There’s nothing like one-size-fits-all, like a silver bullet when it comes to security or, say, cybersecurity issues. But I will say that, as I mentioned, security without human rights is brittle. Whatever we are designing, it must take into account the people.”
Sector-Specific Approaches
The Protection of Subsea Communication Cables session demonstrated sector-specific progress, with Kent Bressie noting that “The ICPC launched in 2021 its best practices for governments for cable protection and resilience,” which include “very specific best practices” for governments.
Addressing Basic Vulnerabilities
The ransomware accountability session highlighted fundamental issues, with Ambassador Dowling noting that “attacks succeed because of basic vulnerabilities” and “we’re not doing enough to patch, because technology companies are not making it easy enough to upgrade software and to replace end-of-life hardware.”
The discussions revealed that while establishing universal baseline cybersecurity requirements remains challenging due to fragmentation and varying national approaches, there is growing recognition of the need for better coordination, leveraging existing standards, and ensuring that solutions are both technically sound and respect human rights.
How can we ensure that provisions of the UN cybercrime convention are not misused for political prosecution? And how can future protocol negotiations be used to strengthen human rights safeguards while maintaining core provisions for addressing cybercrime?
Ensuring Human Rights Safeguards in the UN Cybercrime Convention
The question of preventing misuse of the UN cybercrime convention for political prosecution and strengthening human rights safeguards received limited but significant attention across several IGF 2025 sessions, with civil society expressing substantial concerns about the convention’s current provisions.
Civil Society Concerns About Current Provisions
The most detailed discussion occurred during the Networking Session on Cyber laws and civic space, where Daniela Alvarado Rincon highlighted ongoing concerns: “So very recently, we adopted the UN Cybercrime Convention, for example. And despite, again, many concerns from human rights actors and civil society actors, some of the things that were in the final draft still raise concerns about broad language, about clauses that may open the door for things that could be misused.”
Parliamentary Perspectives on Safeguards
During the Parliamentary session on striking the balance between freedom of expression and cybercrime, experts who participated in the treaty negotiations shared their concerns. Mallory Knodel, reflecting on her involvement in the process, noted that “for the countries in the room that have signed that potentially will be part of that treaty, I think again it’s important to think of the standards and the safeguards that a lot of civil society organizations feel are insufficient, that you jurisdictionally can of course exceed them.”
Paul Ash expressed even stronger concerns in the same session, stating: “I’ve watched things like the cyber crime convention that I think have the potential to be deeply, deeply harmful to all of your constituents if they’re not operated well.”
Advocacy Strategies and Future Protocols
Regarding future protocol negotiations, civil society advocates emphasized the importance of continued engagement. Christian Leon Coronado mentioned that the Al Sur coalition worked on advocacy “in relation to the U.N.’s International Cybercrime Treaty” through collaborative mechanisms with global organizations.
Alvarado Rincon suggested that civil society should continue to “influence as much as possible international instruments dealing with this topic, because those become, somehow, standards then for the governments.”
Implementation Gaps and Opportunities
The NRI Collaborative Session on navigating global cyber threats highlighted implementation challenges, with Lia Hernandez noting: “We have two main cybercrimes conventions in the world, the Budapest Convention and the recently approved United Nations Cybercrime International Convention. I know that not all of us, we are agreed with the [text] of these two conventions. But most of the governments of our region have approved and have signed Budapest. But till now, they haven’t [adapted] their local legislation.”
While the convention was mentioned in other sessions, including the Day 0 Event on fighting global ransomware and various other forums, the specific concerns about political prosecution and human rights safeguards were not addressed in depth, suggesting this remains an area requiring continued attention and advocacy.
How might quantum computing disrupt existing encryption standards and global cybersecurity frameworks? What steps are being taken to promote quantum-safe cryptography?
Quantum Computing’s Impact on Encryption and Cybersecurity: A Growing Concern
The Internet Governance Forum 2025 saw significant attention to quantum computing’s potential disruption of current encryption standards and cybersecurity frameworks, with several sessions highlighting this as an emerging critical issue for global digital security.
The Quantum Threat to Current Encryption
The most comprehensive discussion occurred during the Report Launch session, where Elif Kiesow Cortez explained the specific nature of the quantum threat: “this particular quantum computer that we refer to is defined as a cryptographically relevant quantum computer, so it has a focus, it has the capacity, and it can be utilized for breaking the currently valid encryption, including RSA.”
A particularly concerning aspect discussed was the “Harvest Now, Decrypt Later” threat, where “encrypted data can be recorded right now, and those recordings can be decrypted once malicious actors are able to utilize a cryptographically relevant quantum computer.” This threat was further elaborated in the Dynamic Coalition Collaborative Session, where it was noted that “malicious actors might be recording today’s encrypted communications for days or months or longer with the aim to decrypt them once they can utilise a cryptographically relevant quantum computer.”
The Concept of “Q-Day” and Urgency
Several sessions referenced the concept of “Q-Day” – the moment when quantum computers become capable of breaking current encryption. During the Taking Stock session, Wouter Natus emphasized the urgency: “What we’ve identified is exactly what the implications are if we do not get this right before the so-called quantum day which is the day the first quantum computer actually works and we have a time frame now to make sure that we have the security that we need in place.”
The psychological and social implications were highlighted in the Cybersecurity Odyssey workshop, where Lily Edinam Botsyoe noted: “And they’re talking about Q-Day in this article, and they’re saying, okay, Q-Day is probably looming around, and what Q-Day looks like is a day where everything that is encryption-based or encryption-protected could fail.”
Steps Toward Quantum-Safe Cryptography
NIST (National Institute of Standards and Technology) emerged as a key player in developing post-quantum cryptography standards. In the Report Launch session, it was noted that “they already have guidelines that is on which PQC algorithms are recommended, but also the fact that they set 2035 as a target for all federal systems to migrate to PQC encryption.”
The Internet Engineering Task Force (IETF) is also actively working on implementation, with discussions noting that “IETF is currently working in the TLS 1.3, that would bridge the classical cryptography apart with the post-quantum cryptography.”
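The hybrid approach referred to here pairs a classical key exchange with a post-quantum KEM, so the session remains secure as long as either primitive holds. A minimal, stdlib-only sketch of the combining idea, assuming placeholder byte strings stand in for the two shared secrets (function and context names are ours for illustration; real TLS 1.3 hybrid designs concatenate the secrets into the standard key schedule rather than an ad hoc hash):

```python
import hashlib
import os

def combine_shared_secrets(classical_ss: bytes, pq_ss: bytes,
                           context: bytes = b"hybrid-kex-demo") -> bytes:
    """Derive one session key from both shared secrets.

    An attacker must recover BOTH inputs: a future quantum break of
    the classical exchange alone still leaves the post-quantum secret
    unknown, and vice versa.
    """
    return hashlib.sha256(context + classical_ss + pq_ss).digest()

# Simulated outputs of the two key exchanges (placeholders, not real crypto):
classical_ss = os.urandom(32)   # e.g. an X25519 shared secret
pq_ss = os.urandom(32)          # e.g. an ML-KEM shared secret

session_key = combine_shared_secrets(classical_ss, pq_ss)
assert len(session_key) == 32
```

The design point is defence in depth during the migration period: deployments keep the well-studied classical algorithm while adding a post-quantum one, instead of betting everything on either.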
Practical Implementation Challenges
The transition presents significant practical challenges, particularly for IoT devices. During the E-Waste Management workshop, an online participant asked: “with the rapid progress in post-quantum cryptography and the reality that many existing IoT devices cannot be adapted due to firmware and hardware limitations, how should government and regulators prepare for the imminent wave of cryptographically obsolete devices?”
Testing initiatives are also underway, with the Internet Standards Testing Community event confirming that post-quantum cryptography support “will be added to the Internet.nl testing suite in due time” to ensure “the ciphers used by your web and email server are indeed quantum proof.”
Research and Policy Initiatives
Multiple research initiatives were highlighted across sessions. The Dynamic Coalition on Internet Standards, Security and Safety produced a comprehensive report on post-quantum cryptography, as mentioned in several sessions. During the IGF Intersessional Work Session, Wout de Natris van der Borght explained that IS3C’s latest report, presented at the IGF, “points out the social economic implications of ICTs [that] are not secure by design before the emergence of quantum computing and provides recommendations how to reach that stage before the so-called quantum day.”
Future Outlook
The discussions consistently positioned post-quantum cryptography as a priority issue for 2026 and beyond. As noted during the Taking Stock session: “I think that is going to be the topic of 2026. I heard the questions about quantum computing pop up in different sessions.”
The urgency of addressing quantum computing’s impact on encryption was clearly established across multiple IGF 2025 sessions, with concrete steps being taken through standards organizations, testing initiatives, and policy research to prepare for the post-quantum era.
What are the geopolitical and economic implications of the global race for quantum supremacy?
The global race for quantum supremacy and its geopolitical and economic implications received limited discussion across the Internet Governance Forum 2025 sessions. While most sessions did not address this topic directly, some relevant observations emerged from discussions on related technologies and cybersecurity.
The most substantive mention came from the Dynamic Coalition Collaborative Session, where Elif Kiesow Cortez highlighted the divergent national approaches to post-quantum cryptography, noting “US and EU, they have some distinct but also converging approaches” and referenced specific national programs including “France through [ANSSI] also advocating for hybrid solutions, or Germany through BSI providing guidance” as well as developments “in the Netherlands as well, we have a PQC migration handbook.”
Additionally, quantum computing was briefly referenced in the context of cybersecurity threats during WS #193 Cybersecurity Odyssey Securing Digital Sovereignty Trust, specifically regarding the need for post-quantum cryptography preparations.
The discussions suggest that while the broader implications of quantum supremacy were not extensively explored, there is growing recognition of the need to prepare for quantum computing’s impact on cybersecurity infrastructure, with various countries and regions developing distinct but sometimes converging approaches to post-quantum cryptographic standards.
How can we ensure that quantum computing developments are accessible and beneficial beyond a handful of leading countries or corporations?
The question of ensuring quantum computing developments are accessible and beneficial beyond a handful of leading countries or corporations received limited attention across the IGF 2025 sessions reviewed. Only two sessions touched upon this critical issue, with minimal substantive discussion.
The most relevant discussion occurred during the Launch / Award Event #169 Report Launch: Quantum encryption: blessing or havoc?, where Wout de Natris briefly addressed the global equity concerns, stating that “the whole world starts acting, that nobody’s left behind, that developing nations are assisted to set the steps they need to take. Because if they are vulnerable, we are vulnerable.” However, this discussion remained brief and did not delve into specific mechanisms or strategies for ensuring equitable access.
The Dynamic Coalition Collaborative Session provided a tangential perspective through Elif Kiesow Cortez’s recommendations emphasizing “creating global standards that look at interoperability” to ensure “we are not leaving any organizations, countries, people behind when it comes to making our more secure movement towards the better Internet.” While this touched on inclusivity principles, it did not specifically address quantum computing accessibility.
Overall, the question of quantum computing equity and global accessibility remains largely unaddressed in the IGF 2025 discussions, representing a significant gap in the dialogue around emerging technologies and digital inclusion.
What are the ethical, legal, and societal implications of emerging neurotechnologies? Do we need a new framework for mental privacy and cognitive freedom?
The ethical, legal, and societal implications of emerging neurotechnologies received minimal attention across the Internet Governance Forum 2025 sessions, with only brief mentions in two workshops.
The most substantive reference came from Open Forum #82, where UNESCO’s Mariagrazia Squicciarini highlighted the organization’s recent work on neurotechnologies. She noted that “The latest recommendation that has been worked on is about neurotechnologies and the impact they have on rights, on the people, on, again, what society wants them to do or not to do.” She emphasized particular concern about “the crossroad between AI and neurotech, because that’s where the biggest impact may be on societies.”
A related concern about mental privacy emerged in Workshop #395, where Nieves Molina touched on the evolving nature of freedom of thought in the digital age. She observed that “the issue of freedom of thought now takes a different connotation because technology may advance to the point that what you think can be reachable even if you don’t announce it”, adding that “as technology will advance… we will have to take decisions on whether the legislation that we have covers” such cases.
Despite these brief acknowledgments of the importance of neurotechnology governance and mental privacy protection, there was no comprehensive discussion about the need for new frameworks for mental privacy and cognitive freedom across the forum sessions.
How might advances in biotechnology and neurotechnology blur the line between humans and machines, and what new rights or protections might be needed as a result?
Biotechnology and Neurotechnology: Blurring Lines Between Humans and Machines
The question of how advances in biotechnology and neurotechnology might blur the line between humans and machines, and what new rights or protections might be needed, received limited but notable attention across the Internet Governance Forum sessions.
Brain-Computer Interfaces and Human Enhancement
The most direct discussion occurred in High Level Session 3: AI & the Future of Work, where Ishita Barua highlighted the transformative potential of brain-computer interfaces. She noted: “And perhaps most astonishing, brain-computer interface in combination with AI. With the help of these devices, people with paralysis can regain the ability to move, speak, and even write through direct decoding of brain activity.” However, the session did not delve into the broader implications for human-machine boundaries or necessary protections.
International Governance and Rights Framework
A more comprehensive approach was discussed in Open Forum #82: Catalyzing Equitable AI Impact, where UNESCO’s work on neurotechnology governance was highlighted. Mariagrazia Squicciarini explained: “The latest recommendation that has been worked on is about neurotechnologies and the impact they have on rights, on the people, on, again, what society wants them to do or not to do. And the special attention is also put at the crossroad between AI and neurotech, because that’s where the biggest impact may be on societies.”
Gap in Discussion
Despite the significance of this topic for future internet governance and human rights, the majority of IGF sessions did not address the convergence of biotechnology, neurotechnology, and digital systems. This suggests a need for more focused attention on these emerging technologies and their implications for human identity, rights, and protections in future governance discussions.
In what ways could virtual reality and the metaverse reshape education, labour, and social interaction? How do we address the risks of exclusion, addiction, or manipulation in immersive environments?
The discussions across various IGF 2025 sessions revealed limited but emerging attention to virtual reality and metaverse technologies, with most focus on potential applications rather than comprehensive risk assessment.
Educational Applications and Opportunities
Several sessions highlighted the transformative potential of VR in education. In the Open Forum on DPI and Open Source AI, Judith Okonkwo discussed practical implementations: “VR for Schools initiative that we have, and this is now looking at deploying this technology in really resource-constrained learning environments, so how can we go into a situation where, for example, you have a school without, you know, infrastructure for things like science experiments, right, and that kind of resource-constrained environment, can you bridge the gap with immersive tools, right, can you create a VR lab where students are then able to do simulations.”
The Lightning Talk on Fundamental Rights in Metaverse presented compelling statistics, with Avinash Dadhich noting that “research says that by 2026, more than 70% of students in UK, they will be in the virtual world, in the 3D world for at least one hour in a day” and predicting that “maybe next 10, 15, 20 years we will be a museum of mobiles, and then there will be a virtual world where people will be interacting.”
Social Interaction and Accessibility
The discussions revealed interesting applications for social interaction, particularly for marginalized groups. The Creative Workshop on Tech-Driven Solutions explored how a fully online metaverse could give users more control over digital payments and transactions, and how immersive or augmented-reality environments could help people who are highly visual but less comfortable with conventional social interaction, including people with disabilities.
Indigenous communities are also exploring these technologies. In the Open Forum on Indigenous Peoples’ Languages, Outi Kaarina Laiti discussed “developing content in platforms like Second Life, Minecraft and so on, which I call the indigenous metaverse. It’s growing rapidly and most of these platforms are private-owned.”
Risks of Exclusion and Digital Divides
The risk of exclusion was prominently addressed in the Dynamic Coalition Collaborative Session, where Jutta Croll asked: “Do you think we are at risk of having such a, like a second digital divide in regard of emerging technologies?” Dino Cataldo Dell’Accio confirmed this risk exists, highlighting the concept of a ‘metaverse divide’ and emphasizing the need for ‘inclusiveness by design’ in emerging technologies.
Manipulation and Privacy Concerns
Concerns about manipulation were raised in the Lightning Talk on Fundamental Rights, where Anurag Vijay recounted: “I was speaking to my friend over phone about a Rado watch… The next day, ads were flashing all over my social media”, and asked: “Do you think we need to be protected in that way? Because our decision is being influenced by the artificial intelligence.”
Governance and Child Protection
Child protection in metaverse environments was addressed in the IGF Intersessional Work Session, where Jutta Croll mentioned that current issues include “child rights, recognition while shaping and developing the metaverse, or the engagement in developments of privacy and anonymity preserving mechanisms of age verification solutions as a precondition for a secure and safer usage of the digital environment.”
The broader governance challenge was acknowledged in High Level Session 5, where Fabrizia Benini from the European Commission questioned whether current governance models would remain robust in the face of new technologies: “Will the governance model hold?”
Overall, while the discussions showed growing recognition of VR and metaverse potential, there was limited comprehensive analysis of labor market impacts or detailed strategies for addressing addiction risks, indicating areas requiring further attention in future governance discussions.
Who sets the rules of engagement and content moderation in the metaverse? What governance models are emerging?
The question of metaverse governance and content moderation rules was primarily addressed in the Lightning Talk #143 Fundamental Rights in Metaverse session, where several governance models were explored.
Utkarsh Leo outlined different approaches to metaverse governance, explaining the centralized model: “if you look at a more centralized-based model wherein there is one company, let’s just for the sake of saying, let’s say if meta has all control over the metaverse, then in a way we’re leading towards a more monopolistic form of a world where perhaps suppression can happen.”
As an alternative, he presented the decentralized approach: “Other proposals say that let’s have a very decentralized world which focuses on blockchain technology and a distributed consensus-based model.”
The discussion concluded with a proposed hybrid solution: “a hybrid model wherein you may have a central entity, but at the same time certain rights are recognized through blockchain and distributed consensus-based mechanisms.”
This topic was not substantively discussed in other IGF 2025 sessions, indicating that metaverse governance remains an emerging area requiring further attention in internet governance forums.
Could blockchain offer meaningful solutions for transparency and decentralisation in governance, or are its promises overstated relative to its environmental and scalability concerns?
Blockchain in Governance: Limited Discussion Reveals Mixed Perspectives
The Internet Governance Forum 2025 sessions provided limited but revealing insights into blockchain’s potential for governance solutions, with discussions scattered across technical implementations rather than comprehensive governance analysis.
Technical Applications and Cybersecurity
The most substantive technical discussion emerged from the ransomware accountability session, where Francesca Bosca highlighted blockchain’s cybersecurity applications: “I do see, and I remember, I mean, doing some research on how, for example, blockchain can be used as a sort of like, not, let’s say, the black sheep when it comes to cybersecurity, but on the opposite, and there are some practical implementation areas I’m thinking about, like the threat intel sharing, for example, that can be extremely beneficial when we think about cybersecurity.” She emphasized blockchain’s strength in “distributed architecture and the consensus mechanisms, and obviously the fact that you have, I mean, the key strength of the blockchain resides basically in the immutable data ledger.”
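The “immutable data ledger” property Bosca points to comes from chaining records by hash: each entry commits to the hash of the previous one, so altering any record invalidates every later link. The following is a minimal single-node sketch of that idea (no networking or consensus mechanism, which real deployments require); all names are illustrative.

```python
import hashlib
import json


def _block_hash(data: str, prev_hash: str) -> str:
    # Canonical JSON serialization so the hash is deterministic.
    payload = json.dumps({"data": data, "prev_hash": prev_hash},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


def add_block(chain: list, data: str) -> None:
    """Append a record that commits to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev_hash,
                  "hash": _block_hash(data, prev_hash)})


def verify(chain: list) -> bool:
    """Recompute every hash; any tampering breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        if block["hash"] != _block_hash(block["data"], block["prev_hash"]):
            return False
        prev = block["hash"]
    return True


ledger = []
add_block(ledger, "threat-intel: indicator A")
add_block(ledger, "threat-intel: indicator B")
```

Editing any earlier entry changes its hash, which no longer matches the `prev_hash` stored in the next block, so `verify` fails; this is the tamper-evidence that makes hash-chained ledgers attractive for threat-intelligence sharing.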
Successful Implementation Examples
The Business Engagement Session provided a concrete success story from the UN pension fund. Rosemarie McClean described their blockchain implementation for proof of life verification: “So fast forward to today, almost 60% of our pensioners worldwide take advantage of this technology to meet that annual requirement. And it also has resulted, obviously, in a significant reduction in paper. So it has advanced our strategy to be a paperless organization. And ultimately, it won the Secretary General’s award for sustainability and innovation.”
Environmental and Scalability Concerns
Environmental concerns were notably raised in the DNS Trust session, where Andrew Campling highlighted “the compute cost, the environmental cost” of blockchain operations. The AI for Sustainable Development session drew parallels between blockchain’s evolution and AI’s energy challenges, with Oluwaseun Adepoju noting the “transition from proof-of-work to proof-of-stake in blockchain” as a model for addressing energy consumption.
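The energy cost raised here stems from proof-of-work’s brute-force search: miners hash candidate nonces until one meets a difficulty target, and each extra hex digit of difficulty multiplies the expected work by 16. The toy miner below (illustrative names, trivially low difficulty) shows why the mechanism consumes compute, and by extension why proof-of-stake, which selects validators without this search, is cited as the lower-energy alternative.

```python
import hashlib


def mine(data: str, difficulty: int) -> tuple[int, str]:
    """Brute-force a nonce until the SHA-256 digest starts with
    `difficulty` leading zero hex digits. Expected work grows by a
    factor of 16 per additional digit -- the root of proof-of-work's
    energy consumption."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1


# A low difficulty keeps this demo fast; Bitcoin's target is vastly harder.
nonce, digest = mine("block-payload", difficulty=3)
```

Anyone can verify the result with a single hash, while producing it took thousands of attempts; that asymmetry is the whole mechanism.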
Decentralized Infrastructure and Community Networks
The Community Based Connectivity session explored blockchain’s potential for decentralized infrastructure. Henry Wang discussed how a “blockchain protocol can actually help the community to build their own network based on mesh network, which is peer-to-peer connections” and emphasized data sovereignty: “upon this layer, we’ll have the Web 3.0, which actually decentralizes the data ownership, which means for emerging countries, they don’t have to turn over their data.”
Technological Hype Cycles and Reality
The WSIS+20 Technical Layer session provided perspective on blockchain’s place in technology hype cycles. Israel Rosas observed: “The thing is that if we take a time machine, like, I don’t know, five years, seven years before this one, probably the conversation wouldn’t be around artificial intelligence, but blockchain.”
Overall Assessment
The discussions revealed that while blockchain shows practical applications in specific governance contexts—particularly in identity verification, digital infrastructure, and transparency mechanisms—the sessions lacked comprehensive analysis of its broader governance potential versus environmental costs. Most references were brief mentions rather than substantive evaluations, suggesting that blockchain has moved beyond the peak hype cycle but remains relevant for targeted applications rather than transformative governance solutions.
What role should international institutions play in setting norms for advanced technologies that challenge traditional regulatory frameworks?
The Role of International Institutions in Setting Norms for Advanced Technologies
The discussions across IGF 2025 sessions revealed a strong consensus that international institutions must play a central role in governing advanced technologies that challenge traditional regulatory frameworks, while emphasizing multi-stakeholder approaches and inclusive participation from the Global South.
The Imperative for International Cooperation
Multiple speakers emphasized that technological challenges transcend national borders and require coordinated international responses. As noted in the High Level Session on AI & the Future of Work, “I’m very sure that we address these challenges best with global cooperation. I don’t think that every single nation can make up some kind of framework that will protect people from any kind of risk. We have to find international regulations.” This sentiment was echoed in the Open Forum on Shaping Global AI Governance, where Ambassador Noorman stated that “no country can solve this alone. All AI transcends borders. So must be our response.”
UN Leadership and Global Frameworks
The United Nations emerged as a critical institution for establishing global norms. In the Open Forum on Building an International AI Cooperation Ecosystem, Wolfgang Kleinwächter argued that “If it’s a global problem, we need all countries on the table, and only the United Nations offers this opportunity.” The Opening Ceremony highlighted key UN initiatives, with Li Junhua noting negotiations for establishing “the Independent International Scientific Panel on Artificial Intelligence and the Global Dialogue on AI Governance within the United Nations.”
The Global Digital Compact was repeatedly cited as a foundational framework. In Main Session 2, the panel discussed how this compact represents “the first time every UN member state in the world came together to agree a path on AI governance.”
Regional and Specialized International Bodies
Regional institutions also play crucial roles in norm-setting. The Council of Europe’s work on AI was highlighted in multiple sessions. In the High Level Session on Information Space, Bjorn Berge explained that “the Council of Europe has now agreed and concluded a new international treaty on Artificial Intelligence and Human Rights, Democracy and Rule of Law. The first international treaty of its kind.”
UNESCO’s role in setting ethical standards was emphasized across multiple sessions. In the Workshop on AI Innovation, Guilherme Canela de Souza Godoi noted that “more than 70 UNESCO member states have already implemented the readiness assessment methodology.”
Cybersecurity and Critical Infrastructure Norms
For cybersecurity governance, the Open Forum on Cyber Resilience highlighted the UN framework for responsible state behavior. Pavel Mraz explained that “The UN framework for responsible state behavior in cyberspace… does provide a strong foundation for protecting critical infrastructure.”
Multi-Stakeholder Governance Model
A consistent theme was the need for multi-stakeholder approaches rather than purely state-centric processes. The Workshop on WSIS+20 raised concerns about “risks of more state-centric process to the appointment of experts, exclusion of military applications from the scope of the assessments” in AI governance mechanisms.
Challenges and Limitations
Several speakers acknowledged significant challenges in international norm-setting. In the Workshop on Bridging Internet AI Governance, Bill Drake expressed skepticism, noting that “multilateral regulatory interventions are impossible to contemplate in the Trump era” and questioned what kind of binding international agreements could realistically be negotiated.
The complexity of coordination was highlighted in High Level Session 5, where Thomas Schneider noted that “you will have a distributed system with thousands of actors in the end involved in actions on global, regional and national levels, and you can’t even put them all in one room”.
Global South Inclusion and Equity
A critical concern raised throughout the discussions was ensuring Global South participation in norm-setting processes. In the Open Forum on High Level Review of AI Governance, Abhishek Singh emphasized “we need to bring countries of Global South at the decision-making tables.”
Interoperability Over Uniformity
Rather than seeking uniform global governance structures, many speakers advocated for interoperability between different regulatory approaches. In the Workshop on AI Policy Research, Anne Flanagan argued that “We’re never going to have a global regime. We’re never going to have a single global governance structure. It’s not realistic. It’s also not appropriate” because different regions have different cultural contexts and priorities.
Implementation and Enforcement Challenges
The gap between norm-setting and implementation was a recurring concern. In the Open Forum on Autonomous Weapon Systems, Chris Painter noted the challenge that “There’s now a dispute about what those norms mean, that’s what international law means, and the biggest part… is how do you have accountability once there’s agreement among countries.”
The discussions revealed a complex landscape where international institutions are seen as essential for addressing the global nature of advanced technologies, while acknowledging significant challenges in achieving consensus, ensuring inclusive participation, and implementing effective governance frameworks that balance innovation with protection of rights and security.