Day 0 Event #119 ROAMX Driving WSIS Implementation and Digital Cooperation


Session at a glance

Summary

This discussion focused on the ROAMX framework, UNESCO’s Internet Universality Indicators that measure progress on World Summit on the Information Society (WSIS) commitments and digital development goals. The ROAMX acronym stands for Rights, Openness, Accessibility, Multi-stakeholder participation, and cross-cutting issues like gender equality and sustainability. Dr. Tawfik Jelassi from UNESCO opened by explaining that while digital technologies evolve rapidly, 2.6 billion people remain offline, with significant disparities between high and low-income countries.


The framework has been implemented in over 40 countries since 2018, with second-generation indicators launched in 2024 that include new dimensions like AI governance and environmental impact. Brazil pioneered the framework’s implementation and recently completed an assessment using the revised indicators, revealing both progress in digital public services and persistent inequalities, particularly affecting women and rural populations. Fiji piloted a new capacity-building workshop approach that revealed significant gaps in inter-governmental coordination, even after extensive consultation processes during strategy development.


Speakers emphasized that ROAMX serves not just as an assessment tool but as a comprehensive framework for the entire policy lifecycle, from planning to monitoring and evaluation. The discussion highlighted persistent challenges including data gaps, particularly around gender-disaggregated information, and the need for meaningful connectivity rather than basic access. Participants stressed the importance of multi-stakeholder engagement and the framework’s potential to support national and regional Internet Governance Forums. The session concluded with calls for broader adoption of ROAMX as a strategic tool for inclusive digital transformation that leaves no one behind.


Key points

## Major Discussion Points:


– **ROAMX Framework Overview and Evolution**: The discussion centered on UNESCO’s ROAMX (Rights, Openness, Accessibility, Multi-stakeholder participation, and cross-cutting issues) framework for measuring digital development and WSIS implementation. Speakers highlighted the launch of second-generation indicators in 2024, which include new dimensions like AI governance, environmental impact, and meaningful connectivity.


– **Country Implementation Experiences**: Detailed presentations of ROAMX applications in Brazil (as the first pilot country implementing revised indicators) and Fiji (featuring a new capacity-building workshop approach). Brazil’s assessment revealed advances in digital public services but persistent inequalities, while Fiji’s experience demonstrated gaps in inter-governmental coordination despite consultation efforts.


– **Data Gaps and Gender Digital Divide**: Multiple speakers emphasized the critical lack of disaggregated data, particularly sex-disaggregated data, which hampers effective assessment of digital inclusion. The persistent gender digital divide was highlighted as a key challenge, with women underrepresented not just as users but as creators, decision-makers, and leaders in technology sectors.


– **ROAMX as a Multi-Purpose Tool**: The framework’s versatility was emphasized – it serves not only for periodic national assessments but also as a planning tool for strategy development, implementation monitoring, and evaluation. Speakers noted its potential to connect with national and regional Internet Governance Forums and support evidence-based policymaking.


– **Integration with WSIS Plus 20 and Global Digital Cooperation**: The discussion positioned ROAMX as a strategic tool for measuring progress on WSIS commitments and supporting the upcoming WSIS Plus 20 review, emphasizing its role in ensuring digital transformation remains human-centered and rights-based.


## Overall Purpose:


The session aimed to demonstrate how UNESCO’s ROAMX framework can drive WSIS implementation and digital cooperation by providing concrete examples of country applications, showcasing the framework’s evolution with second-generation indicators, and positioning it as a key measurement tool for the WSIS Plus 20 review process.


## Overall Tone:


The discussion maintained a consistently professional and collaborative tone throughout. It was informative and forward-looking, with speakers sharing practical experiences and lessons learned. The tone was optimistic about the framework’s potential while being realistic about persistent challenges like data gaps and digital divides. There was a strong emphasis on multi-stakeholder collaboration and inclusive approaches, reflecting the participatory nature of both the ROAMX framework and the broader Internet governance community.


Speakers

**Speakers from the provided list:**


– **Tatevik Grigoryan** – Session moderator, UNESCO staff member working on the ROAMX initiative


– **Tawfik Jelassi** – Assistant Director General of UNESCO for Communication and Information, delivered keynote remarks


– **Fabio Senne** – Project Coordinator at the Regional Centre of Studies on Information and Communication Technologies (CETIC.br), UNESCO Category 2 Institute; involved in initial IUI framework development and Brazil’s pilot assessments


– **Davide Storti** – Program Specialist at UNESCO for Digital Policies and Transformation, coordinates UNESCO’s WSIS-related activities (participated online)


– **Dorcas Muthoni** – Founder and Chief Executive Officer of Open World, a specialist computer software company established in Kenya; works on gender digital divide and women in technology leadership (participated online)


– **Guy Berger** – Described as “the father of the ROAMX” and the regional ROAMX indicators; audience member who provided commentary


– **Chris Buckridge** – Independent consultant, analyst, and commentator in the Internet governance and digital policy space; worked for over two decades with regional Internet registries, including APNIC; current MAG (Multi-stakeholder Advisory Group) member


– **Anriette Esterhuysen** – Human rights defender and computer networking pioneer from South Africa; former chair of the multi-stakeholder advisory group of the IGF; former executive director of the Association for Progressive Communications (APC); involved in ROAMX development and implementation


**Additional speakers:**


– **Camilla Gonzalez** – UNESCO colleague working on the ROAMX initiative (participated online, mentioned but did not speak in the transcript)


Full session report

# UNESCO ROAMX Framework: Driving WSIS Implementation and Digital Cooperation – Discussion Summary


## Introduction and Session Context


This early morning “day zero” session at IGF 2025 examined UNESCO’s ROAMX framework and its role in driving WSIS implementation and digital cooperation. The hybrid online and in-person discussion, moderated by Tatevik Grigoryan from UNESCO, brought together international experts to share implementation experiences and explore the framework’s potential applications.


After resolving initial technical difficulties with headset channels, the session proceeded with presentations from UNESCO officials and implementers from Brazil, Fiji, and Kenya, followed by commentary from Internet governance experts.


The ROAMX acronym represents five core dimensions: Rights, Openness, Accessibility, Multi-stakeholder participation, and cross-cutting issues (the X), including gender equality and sustainability. Since its launch in 2018, the framework has been implemented in over 40 countries, with second-generation indicators introduced in 2024.


## Opening Keynote: Technology and Digital Divides


Dr. Tawfik Jelassi, UNESCO’s Assistant Director General for Communication and Information, opened by quoting historian Melvin Kranzberg: “technology is neither good nor bad, nor is it neutral.” He emphasized that technology’s impact depends fundamentally on human choices, values, and system design.


Jelassi highlighted persistent global digital inequalities, noting that 2.6 billion people remain offline worldwide. The disparities are stark: while 93% of populations in high-income countries use the internet, only 27% in low-income countries have access. He positioned ROAMX as a strategic tool for evidence-based policymaking that has already demonstrated concrete policy outcomes across its 40+ country implementations.


## Framework Evolution and Applications


Davide Storti, UNESCO’s Programme Specialist for Digital Policies and Transformation, explained that ROAMX serves as a translation mechanism, converting WSIS ideals into measurable outcomes while providing a common language for diverse stakeholders in digital governance.


The second-generation indicators launched in 2024 incorporate new dimensions including artificial intelligence governance and environmental impact assessment. Storti emphasized that ROAMX’s value extends beyond periodic assessments to support the entire policy lifecycle, from strategy development through implementation monitoring and evaluation.


## Country Implementation Experiences


### Brazil: Comprehensive Assessment and Findings


Fabio Senne from CETIC.br detailed Brazil’s experience as both the original pilot country in 2018 and the first to implement second-generation indicators. Brazil’s assessment revealed significant advances in digital public services, with the gov.br platform now offering 4,500 services to 160 million users.


However, the assessment also uncovered persistent inequalities. Disaggregated data revealed that black women showed substantially lower levels of meaningful connectivity compared to other demographic groups, highlighting intersections of racial and gender inequalities in digital access.


Senne emphasized the critical importance of multi-stakeholder engagement, which improved data quality by accessing information from civil society and private sector sources that government data alone could not provide. The assessment also revealed coordination challenges within government structures, with participation remaining fragmented across different departments despite Brazil’s established multi-stakeholder frameworks.


Brazil committed to completing multi-stakeholder validation of their revised assessment and launching the final report by September-October.


### Fiji: Capacity Building and Coordination Gaps


Anriette Esterhuysen, a human rights defender and computer networking pioneer from South Africa, shared insights from Fiji’s implementation using a new capacity-building workshop approach. The most striking finding was a significant gap in inter-governmental coordination: despite an extensive eight-month consultation process during development of Fiji’s national digital strategy, two-thirds of government departments were unaware of the strategy’s existence.


This discovery highlighted a critical disconnect between policy development processes and actual implementation awareness across government structures. Esterhuysen noted that while the strategy development had involved extensive consultation, the reality of cross-government awareness was far more limited than anticipated.


The Fiji experience demonstrated ROAMX’s potential beyond assessment, with Esterhuysen observing that the framework “works extremely well in assessing a strategy” and “could work as well as a planning tool” throughout the full policy lifecycle.


### Kenya: Gender Digital Divides and Data Gaps


Dorcas Muthoni, founder and CEO of Open World in Kenya, highlighted the persistent gender digital divide and critical lack of sex-disaggregated data across multiple dimensions of digital participation. This data gap makes it difficult to assess true gender disparities in technology adoption, usage patterns, and particularly leadership roles within the technology sector.


Muthoni emphasized challenges women face in progressing to technology leadership positions, describing “lonely career journeys” with limited role models and support systems. This leadership gap means women’s perspectives are underrepresented in technology design, policy development, and strategic decision-making processes.


## Expert Commentary and Framework Applications


### Multi-stakeholder Engagement and Data-Driven Governance


Chris Buckridge, an independent consultant and Internet governance expert, articulated the relationship between inclusive and evidence-based approaches: “data-driven, it cannot be comprehensive unless it’s inclusive… But at the same time, inclusive governance can’t be effective, can’t be practical unless it is data-driven.”


Buckridge highlighted ROAMX’s potential to foster sustainable multi-stakeholder engagement and complement national and regional Internet governance initiatives, drawing on his experience with EuroDIG events.


### Digital Literacy and Rights Education


Esterhuysen emphasized limitations of current digital literacy approaches, noting that many programs are “vendor-driven or device-focused” and fail to address broader digital citizenship complexities. She advocated for comprehensive approaches connecting rights education and civic education with technical skills development.


Esterhuysen also noted communication challenges with terms like “Internet Governance Forum,” observing that people find the concept difficult to understand and don’t grasp that it involves all aspects of digital cooperation, not just narrow technical governance.


### Foundational Principles and Emerging Technologies


Guy Berger, introduced by Tatevik as “the father of the ROAMX and the regional ROAMX indicators,” emphasized that “Internet universality remains foundational for AI and digital technologies, enabling people to both access and produce content and services.” This perspective suggests foundational digital governance principles remain relevant as technologies evolve.


A brief exchange between Berger and Esterhuysen revealed different perspectives on terminology, with Esterhuysen suggesting “Internet universality” might not be “future-proof” while acknowledging that people readily understand the underlying ROAMX principles.


## Persistent Challenges and Gaps


### Data and Coordination Issues


The discussion consistently highlighted the lack of comprehensive sex-disaggregated data across countries, making it difficult to assess and address gender digital divides. Coordination challenges within government structures emerged as a common theme, even in countries with established consultation mechanisms.


### Environmental and Emerging Technology Governance


Senne briefly noted that environmental issues like energy consumption and electronic waste are “largely overlooked” in current digital policies. The integration of AI governance into the second-generation indicators reflects growing recognition of the need to address emerging technologies within existing frameworks.


## Integration with WSIS Plus 20 and Global Digital Cooperation


Storti positioned ROAMX as a strategic tool for the upcoming WSIS Plus 20 review process, emphasizing its role in translating WSIS ideals into measurable outcomes. The framework’s comprehensive coverage of WSIS commitments and demonstrated implementation across multiple countries provides concrete evidence for assessing global progress on information society development goals.


## Key Recommendations and Next Steps


The session concluded with several concrete recommendations:


– UNESCO called on governments, regulators, civil society, and stakeholders to embrace ROAMX as a strategic tool for digital transformation


– Participants encouraged national and regional Internet Governance Forums to explore using the ROAMX framework for their initiatives


– Speakers emphasized the importance of addressing data gaps by turning them into policy recommendations


– The discussion highlighted potential for developing collaboration between ROAMX assessments and existing national/regional initiatives


## Conclusion


The discussion demonstrated strong consensus among diverse stakeholders about ROAMX’s practical value while identifying important areas for continued development. The framework’s evolution from a periodic assessment tool to a comprehensive policy lifecycle instrument reflects its adaptability and growing recognition across different contexts.


The combination of theoretical framework and practical implementation experiences from Brazil, Fiji, and Kenya provided concrete evidence of both the framework’s utility and persistent challenges in digital governance. The session successfully moved beyond simple advocacy to critical examination of how comprehensive frameworks can be more effectively integrated into digital policy development and implementation processes.


Session transcript

Tatevik Grigoryan: … We should go there. No, I’m not connected. Okay. Let’s just… I’ll just… If you have a question… If you have a question… It’s 20, okay. Can you send it to me, you don’t have it. You don’t have, do you have? You want to send it to me now. Apparently they can hear. Okay, my colleague sent me the link and apparently they can hear me now. Good morning everyone, online and here in the room, thank you so much for joining. You need to put on a headset to be able to… follow us. Please, everyone, could you use the headset, otherwise you won’t be able to hear us, and those in the room, you should select channel number five. Apologies to the colleagues online, we’ll wait for the audience here to put on their headsets. Okay, so it’s channel number five. Thank you so much again for joining us early in the morning, online and here in the room. We’re very pleased to host you in this session focusing on the role of ROAMX, and we’ll go on telling you a bit more about what ROAMX stands for and its role in measuring the WSIS implementation and the action lines ahead of the WSIS Plus 20 review. I’m very pleased to introduce to you an excellent lineup of speakers here with me. We’re joined today in the panel by Dr. Tawfik Jelassi, the Assistant Director General of UNESCO for Communication and Information, who will deliver the opening keynote remarks. I think without any further ado I’ll give the floor to Dr. Jelassi, and then we’ll go into the discussion and I’ll introduce my panel. Thank you so much, ADG, for being here, and we very much look forward to your keynote remarks. Thank you.


Tawfik Jelassi: Thank you very much, Tatevik. Panelists, participants, friends and colleagues, I’m very pleased to join you for this session on ROAMX driving WSIS implementation and digital cooperation. I would like to thank the IGF for their support to UNESCO and for providing this opportunity for us to have an exchange on this topic. I’m also grateful to our speakers and Chris, who will be shortly introduced by the moderator. Their expertise and commitment have been instrumental in advancing the UNESCO work on Internet universality. As the WSIS Plus 20 review is underway, we are reminded that digital technologies are evolving faster than the frameworks that are designed to govern them. And yet 2.6 billion people remain offline as of today, most of them in the least developed regions. In low-income countries, only 27% of the population uses the Internet, compared to 93% in high-income countries. The cost of access, the lack of infrastructure, the entrenched inequalities, including gender gaps, continue to hinder digital inclusion. UNESCO has been advocating for a rights-based, human-centered and inclusive vision for the digital age. This framework gives emphasis to openness, accessibility, multi-stakeholder governance and capacity building. To ensure that this vision is not only aspirational but actionable, we need the right tools to identify gaps, guide reforms and measure progress. And this is where the ROAMX framework comes in. Since its initial launch back in 2018, and with the second-generation indicators which we released last year, the ROAMX has become a strategic enabler for national digital assessments. It supports evidence-based policymaking by helping countries assess their digital ecosystems through the lens of the ROAMX principles. For those who are not familiar with it, let me briefly remind you of its elements: the R stands for human Rights, the O for Openness, the A for Accessibility, the M for Multi-stakeholder participation, and the X refers to cross-cutting issues such as sustainability, gender equality and online safety. The revised indicators include new dimensions such as AI governance, environmental impact, privacy and meaningful connectivity, aligning the framework with global milestones such as NETmundial+10 and the Global Digital Compact. So far, more than 40 countries have applied the ROAMX framework. In Argentina, as an example, the national digital assessment informed legislation to reform data protection laws. In Paraguay, the National Statistics Office began collecting disaggregated digital data. A ROAMX capacity-building workshop took place in Fiji earlier this year and has inspired digital policy planning involving national stakeholders. Countries like Brazil and Uzbekistan have begun the pilot implementation of our second-generation indicators. These outcomes are not isolated. They reflect a growing recognition that data-driven inclusive governance is critical for the digital age. However, the digital divide continues to persist, especially for women and girls, who remain underrepresented online and in digital policy making. The revised ROAMX indicators maintain a strong emphasis on gender inclusion, digital literacy, affordability, cultural norms, and safety concerns. This brings me to our call to action. We urge governments, regulators, civil society, and all stakeholders to embrace the ROAMX as a strategic tool to drive digital transformation. 
It offers a robust, adaptable, and forward-looking methodology to monitor WSIS implementation, align with SDG targets, and ensure a digital development that is transparent, equitable, and accountable. As the historian Melvin Kranzberg reminded us, technology is neither good nor bad, nor is it neutral. The impact of technology is shaped by human intent, by the choices we make, the values that we want to protect, and the systems we design. Let’s develop, use, and govern technology in ways that promote shared progress. Let’s put people, rights, and equity at the center of our digital future. We believe that with the ROAMX, we have the means to achieve that. Thank you for your attention.


Tatevik Grigoryan: Thank you so much, ADG, and thank you for setting the stage and giving a comprehensive overview of what ROAMX stands for, and giving a few examples of how we demonstrated its value and power. And now I’ll go on, and thank you so much again for being here. I know you won’t be able to stay until the end, but we very much appreciate and value your presence. As you gave the overview of the ROAMX, just to also mention that in addition to demonstrating the value of ROAMX and showcasing a few examples, including how the revised indicators have now been introduced in Brazil and Uzbekistan, the session will also focus on demonstrating the relevance of the ROAMX framework in assessing the progress on WSIS commitments and the SDGs. And I’ll go forward and introduce my speakers in the speaking order, but not in the sitting order. Fabio Senne, who is a project coordinator at the Regional Centre of Studies on Information and Communication Technologies, CETIC.br, which is also a UNESCO Category 2 Institute. Fabio has been involved in, of course, the initial IUI framework development. Brazil was the one who piloted the first assessment and the first one to pilot the new revised indicators, which we launched in 2024, last year at the IGF. I have Anriette Esterhuysen, who is a human rights defender and computer networking pioneer from South Africa. She is, as everyone knows, a pioneer in Internet and communication technologies. She’s a former chair of the multi-stakeholder advisory group of the IGF. She used to be the executive director of the Association for Progressive Communications, and she still continues to work with the APC and with many other entities, including with UNESCO. She’s been instrumental in both the development of the initial indicators, the revision, and also the implementation of the workshop in Fiji. Online, we are joined by Dorcas Muthoni, who is the founder and chief executive officer of Open World, a specialist computer software company she established in Kenya when she was only 24 years old. And finally, I have Chris Buckridge to my right, who is an independent consultant, analyst, and commentator in the Internet governance and digital policy space. He worked for more than two decades with regional Internet registries, starting with APNIC. He’s a current MAG member, and he has had and still has many other roles, which I would like to invite him to share with us but will not read out. I am joined also online by two of my colleagues: Davide Storti, a program specialist at UNESCO for Digital Policies and Transformation, who is coordinating our activities related to WSIS, and my colleague Camilla Gonzalez, who also works on the ROAMX initiative. Thank you again. I would like to start by giving the floor to my colleague Davide Storti, who will just give a little bit more of an overview of the interaction of ROAMX and WSIS, and how the idea came about of using the ROAMX to measure the WSIS implementation. Please, Davide.


Davide Storti: Good morning, everyone. Thank you, Tatevik. Yes, so as ADG Jelassi has mentioned, technology goes super fast, and UNESCO has already highlighted on a number of occasions the different shifts that have happened in technology and in society. Therefore, when considering the WSIS as a process and the action lines that lay down the foundational aspirations of the WSIS process, like access, inclusion, rights, the ROAMX indicators translate these ideals into measurable outcomes. The connection between the WSIS Plus 20 and the different challenges brought up by these shifts, through the lens of the IUI indicators, is the possibility of measuring the advancement of these technologies, artificial intelligence, the impact of digitalization, the status of indicators like gender equality or the online rights of the population, and also having a measurement of data protection, trust in the media, and misinformation, for example. So this framework may actually help or support the measurement of how the WSIS framework, which is based on principles, evolves and how it is anchored to reality, by allowing us to catalyze evidence-based results and also collaboration among the different stakeholders of the WSIS process. It provides, I should say, a common language for different stakeholders, country-to-country reporting, also analysis, also a way of comparison to highlight the different positions of evolution. As was mentioned, a big chunk of the population is not online yet, so there are different aspects to be taken into account. It can also give inputs to dialogues like the IGF through national and regional analysis of progress, and give some sort of diagnostic for guiding different investments by country or different needs assessments, and also needs in terms of policies and regulations, etc. So in the different action lines of the WSIS, the ROAMX provides some grounds for tracking participatory and transparent digital policymaking, for example, or for examining connectivity and how affordability and digital skills come through, or maybe giving some granular ways to measure online safety, data protection, even a strategy for cybersecurity, etc. So there is an opportunity to have a framework which already has some measurement, which has already been applied in different countries, and the new revision also helps us to be more precise in this kind of measurement. So, if used properly, and I think the panel today will give different points of view on this matter, the enabling of national-level evidence of the ROAMX applied to different countries may give a better view of what the global impact of the WSIS framework is overall, and also guide the review; the findings of the indicators in different countries may also provide some grounds for the review itself and for the future of the WSIS as the review comes up. So I look forward to this discussion, and I invite all the IGF stakeholders to consider the IUI framework as one significant basis for the process of the WSIS as it comes forward. Thank you.


Tatevik Grigoryan: Thank you so much, Davide, for your excellent intervention and for your call to indeed adopt and approach ROAMX under this lens. And before I give the floor to Fabio, who will now focus on the application of ROAMX and give a few examples and show us the first impressive findings of the implementation of the revised indicators, which looks like this. And Fabio, bravo indeed that you have made such progress already since the launch of the indicators. I wanted to acknowledge the presence of Guy Berger, who is sitting in the audience, who is the father of the ROAMX and the regional ROAMX indicators. Thank you so much, Guy, for being here, and I hope we can hear from you afterwards. But now, Fabio, please just tell us a bit more about the ROAMX in Brazil and the new application, and how effective do you think the new revised indicators were, and how were they perhaps a bit different from the first experience. Thank you.


Fabio Senne: Okay, so thank you very much, Tatevik. Thank you. I acknowledge all the speakers and panelists. It’s a great pleasure to be here. And CETIC.br, NIC.br and CGI.br were in at the very beginning of the process of the ROAMX, also the creation of the framework and also the implementation. As you said, Brazil was the first country to pilot this framework back in 2018. And now we accepted the challenge that Mr. Jelassi presented to us back in December at the past IGF to renew the data collection on Brazil with the new second-generation version of the indicators. We accepted this challenge and concluded the data collection phase of the project. I will bring here some initial results. But, of course, it will go through a multi-stakeholder validation, and we don’t have the full report yet. But I’ll bring to you a few main results. Just to mention, as I said, that Brazil was involved in the discussion of the framework along with lots of consultations with the multi-stakeholder community. And back in 2019, we launched the first assessment report of the country in the area, at the IGF 2019 in Berlin, and then along this process we also supported other countries, especially Latin American countries, to also implement this methodology, so we had lots of exchange during this period. And from 2023 to 2024 we also supported UNESCO in the revision of the indicators, the five-year revision that was expected to be concluded by UNESCO, and now we are implementing the next version here. Just to highlight a few preliminary findings of the discussions. First of all, if you take the case of Brazil, and this is important to say, in our case CETIC.br and NIC.br are responsible for data collection and for the technical team that is collecting all the indicators from multiple sources, but we have a multi-stakeholder advisory committee with the CGI.br, which is helping us, supporting us and advising the whole process. We had a first meeting of CGI.br that validated the start of the process, and now, after the data collection, we will have validation from the CGI.br. But if you take a few advancements and challenges that we have so far: in the past years Brazil has seen an intensification in the public institutional debate on platform regulation and information integrity, which, as Davide mentioned here, is also a WSIS Plus 20 topic, driven by the growing impact of disinformation, hate speech and how this affects democratic processes. The discussion has focused on the responsibility of digital platforms in moderating harmful content and protecting users’ rights, especially in the light of judicial interventions that took place in the country, especially in the electoral bodies. However, there is a lack of consensus on how to approve specific legislation on the topic, and the debate is still fragmented across different political interests. And while there is a legal framework in place in the country, anchored by the Marco Civil da Internet and the LGPD, which is our local GDPR, the enforcement of this process is still uneven and critical gaps persist. If you take the openness dimension, it’s very interesting because over these past five years we have had huge advances in the provision of digital public services and also with this dimension of DPIs. So, for instance, the platform gov.br in Brazil nowadays offers 4,500 services online with more than 160 million users. 
And these initiatives supported administrative processes and increased access to public information in a more participatory government. However, these gains are not equally distributed, so there are still significant inequalities in access to these digital online services, especially among populations with low digital literacy, limited connectivity or disabilities. So, there are usability… There have also been significant investments in bridging coverage gaps in the country in this period, and the concept of universal and meaningful connectivity has entered the national policy conversation and debate, being addressed in several strategic plans that are under discussion. But there is a growing recognition that we have challenges. Connectivity remains unevenly distributed, with rural areas and lower-income groups, especially low-income classes, facing disadvantages. Gender and racial disparities are also relevant. We show in the report, for instance, that black women present lower levels of meaningful connectivity over time, and those are exacerbated by digital skill gaps and mobile-only access for this stratum of the population. So there is a need for equity-driven strategies that address these overlapping dimensions. In the case of multi-stakeholder participation, Brazil has a legal and institutional architecture that provides a solid foundation for multi-stakeholder participation through the Marco Civil da Internet and the institutional role of CGI.br, which embodies the principles of collaborative democracy. This is a model of democratic and transparent governance. This model is internationally recognized and has supported inclusive dialogues such as the Brazilian IGF that is coordinated by CGI.br. However, if we take broader digital policies, multi-stakeholder participation remains inconsistent. In many ministries and regulatory environments, the inclusion of stakeholders is still fragmented in terms of participation. And finally, to conclude, if we take the cross-cutting issues, one of the new indicators that was included in the framework is related to AI development and governance. So, you can say that Brazil advanced in AI governance in the past few years with the launch of the National Artificial Intelligence Strategy and the National AI Plan. However, the governance framework for AI is still in progress, with a national law under discussion in Congress. And crucial aspects such as transparency, risk assessment and rights-based safeguards remain unresolved, and also multi-stakeholder engagement when it comes to AI. And if you take one new indicator, that is environmental issues, this is one that we saw largely overlooked in digital policies so far. So, there are still issues such as energy consumption, e-waste and emissions that are not yet well integrated into the governance framework. So, this is a challenge that we identified by having this new indicator proposed. So, just these few overall remarks. Just to say that we are now presenting here these preliminary results. We will now enter the phases of validation in our multi-stakeholder discussion and plan to launch the final report by September or October. So that’s it. And we can later discuss more on the implications of this. Thank you very much.


Tatevik Grigoryan: Thanks so much, Fabio, for presenting the findings. And it’s very interesting to observe both the progress and also the issues that persist. And it’s also interesting to see the application of these newly introduced indicators. I look forward to reading the report. I would like to now give the floor to Anriette. And Anriette, I would like you to please focus on Fiji. This year, for the first time ever, we introduced, piloted, a new intervention in the margins of the ROAMX framework. Following the assessment, we piloted this capacity-building workshop to support the multi-stakeholder advisory board, but also the global, not global, but the stakeholders’ wider community in Fiji to implement the recommendations, focusing basically on digital policy making, policy implementation, capacity building, and having the ROAMX assessment as a basis and evidence for that. Anriette, would you please focus on that?


Anriette Esterhuysen: Thank you, Tatevik. Well, yes, it was a really interesting experience. So what we did was that Fiji, a relatively small country, had recently approved a national digital strategy. And they’d completed a national assessment using the ROAMX framework. So we tried to bring these together. I mean, the first thing, I mean, the question that you had in the script was, you know, what do I want to stress? I mean, I’m glad you’re not asking me all the questions because I prefer ad-libbing. But there was one question that you asked me which I think is important, which is: what should countries do when they’re implementing digital strategies at a high level? And I really think the answer for us was clearly consultation, collaboration and connections. And what was, I think, the most powerful learning of this workshop, which was a policy implementation workshop on how you can use the ROAMX framework to support implementation of the national digital strategy, was that even after about an eight-month period of the people developing the national digital strategy believing that they’d consulted thoroughly, two-thirds of the government departments, and we must have had about eight different ministries, did not know about the national digital strategy. So there was this disconnect between the people who developed the strategy, who were convinced that their consultation process was perfect, and the people in the government departments who have to implement the strategy, who’d never heard of it. And I think that’s one thing: you can never underestimate the complexity of different parts of government. We’re not even talking about multi-stakeholder collaboration here. We’re talking about intergovernmental collaboration, about the complexity in them actually working together, collaborating, understanding who’s doing what, and how they can make the connections between the different issues. And I think for us, as the team coming from UNESCO and people who’ve been involved in the Fiji national assessment, I think we had a really powerful discovery, and that is that the ROAMX framework is not just suited to assessing a national internet environment. It actually works extremely well in assessing a strategy, and before it’s being implemented it could work as well as a planning tool, in assessing the design of a strategy or the design of implementation, and equally at the level of monitoring and evaluation. So in fact what we found is that the ROAMX framework is suited to the full lifecycle of policy development and implementation, from design to monitoring, learning and evaluation. And I think sometimes we forget: people always talk about the indicators, but they forget that the indicators actually are there to help you answer the primary modality of the framework, which are questions. And I’m going to give you an example. So, for example, in Theme F of the framework on social, economic and cultural rights, which is in Category R, the rights, the R of the R-O-A-M-X, the first question is: does the national strategy for digital development address economic, social and cultural aspects of digital rights? And then there are indicators. One of the indicators is evidence of inclusion. Now, you can apply this question as easily to a policy instrument as you can apply it to the national Internet context. And I think that’s what we found extremely useful. 
And I think we also learned that in spite of their best efforts to develop this national digital strategy, it tended to be very sort of supply-driven. It focused a lot on infrastructure, on planning. It did not focus on rights at all. It overlooked rights. It might have had an emphasis on data protection, but I think aside from that, there wasn’t very much. It didn’t explicitly address multi-stakeholder participation, even though it used the term multi-stakeholder. Openness was treated in a very narrow way. And when it comes to gender, there was virtually no content on gender, some emphasis on girls and capacity building for women and girls. So I think that was the other learning, that even though the people who develop these digital strategies are doing it to the best of their ability and they try to be as inclusive as possible, they tend to overlook the R-O-A-M-X. And I think that’s the other thing that we found. There was a complementarity. There was a lens that was provided by the ROAMX framework, which really filled the gaps and connected the dots. You know, ROAMX actually started at a conference called Connecting the Dots. And I think it still plays that role, to connect the dots between initiatives aimed at building digital literacy, building access to infrastructure. And then I think, Tatevik, the final thing I can share, and maybe we can come back to it, although we don’t have that much time left, is that what we found is that whereas the R-O-A-M-X and the principles endure, I think they’re very future-proof. The concept of Internet universality was not future-proof. In fact, people don’t really understand it, because they do have Internet access. They might not have meaningful Internet access, or they might not have equal Internet access, but I think people found that concept of Internet universality difficult to relate to. But they did not find the principles of rights, openness, accessibility, multi-stakeholder, and the issues covered under cross-cutting difficult to relate to. They even found the concept of an Internet governance forum difficult to understand. We tried to propose the idea of a national Internet governance forum as a way of building more collaboration around the implementation of the national digital strategy, but when they hear the words Internet governance forum, it doesn’t convey to them the idea that it is a forum that actually involves all aspects of digital cooperation and governance. To me, that was a real revelation. Very useful, and I personally think that we have adaptability and utility in the ROAMX framework and principles that we’re only just beginning to discover.


Tatevik Grigoryan: Thanks so much, Anriette, for these insights. Actually, I know we’re behind time, and I know you need to leave early, so I hope Chris and Dorcas Muthoni online will forgive me if I ask you a follow-up question, mindful that you also need to leave early. Speaking of adaptability and looking ahead to WSIS Plus 20 and the GDC, could you elaborate on the idea of how ROAMX can help ensure that the next phase of global digital cooperation is more inclusive and grounded in human rights and equity, especially in the global south?


Anriette Esterhuysen: I think in a way I’ve answered that already. I think we need to use the framework not just to do these periodic national assessments. I think that’s very powerful. It works very well in a country like Brazil, where you do have an institution like NIC.br, like CETIC, you have the CGI, the Brazilian Internet Steering Committee, because you can come back and you can reflect and you can fill gaps. But I think it also works well as a planning tool, in assessing strategies and the implementation of those strategies, and I think it can also be used at the monitoring and evaluation level.


Tatevik Grigoryan: Thanks so much, Anriette. I would now turn to our next speaker, who is online, Dr. Dorcas Muthoni. I would like to invite you to speak about your experience, as you have had a direct impact on digital transformation in Africa through your work. So would you please highlight some of the biggest implementation challenges that you had in turning these policies and strategies, since we’ve been also talking about the strategies, into results on the ground?


Dorcas Muthoni: Okay, thank you. Thank you very much for that question, and also the opportunity to just contribute to this panel. I just want to speak specifically around, you know, digital transformation across gender spaces, or the gender digital divide, as well as the small business sectors that I’ve worked on a lot in the recent past. And let me just say that one of the areas that we have found very, very challenging is just, for example, coming to gender, we want to assess and find out, you know, is there any sex-disaggregated data that’s available for us to do the analysis, in terms of how to understand, you know, how is the penetration, for example, how is access, how are social norms affecting, you know, adoption and inclusivity in terms of digital transformation, and what are the disparities in terms of technology adoption. And the truth of the matter is, there’s hardly any data. This is very challenging, because then it means that this is one area that we are not really proactive in assessing. And I think this would again also really impact national strategies. When you go to small businesses, there’s a lot of uptake of technology, especially mobile-driven access, which tends to have a very strong social aspect. But we want to assess productive use of this internet to impact these businesses. And what you find is that you then struggle to find, you know, data that you can rely on. And so what I find really outstanding about this ROAMX framework is that I think we should encourage a lot more national assessments, but also, apart from national assessments, which sometimes can, you know, take time, because, you know, you need to convince policy makers, you may not have a well-housed, you know, government department that’s keen on pursuing this kind of research, you know, based on other priorities, encourage other stakeholders to really take these up and help us access data that can allow us to have some baselines, some, you know, points of reflection, and also encourage people to use that to take on certain actions that begin to change the trajectory. Because I know we are all very excited about many emerging technologies, but what you find is that there are a lot of people who only hear about them but cannot really be part of the productive elements of how these technologies help. And I think particularly when it comes to the gender digital divide, one of the things we found very challenging is how do we get women into leadership. Because we want women to come in as users, we want them to embrace technology, but when they’re in the technical areas, how do we encourage them to go all the way up into leadership roles, into, you know, policy and decision-making roles, and how do we support them, with reference to data again, how do we support them first to that level, because then they form the role models that will inspire other young generations. And for some reason you find that a lot of women who succeed then want to go back and do something, because they have had very lonely career journeys. So that’s one of the things that we have found: the lack of data that really allows us to assess from these kinds of perspectives is one of the biggest gaps. 
Then, when we think about ROAMX, we’re working on something in one of my organizations, called AFJ, where we really support women growing in the technology field and pursuing their careers, and we’re very interested in, you know, gender equality, and we want to see the women take these opportunities. And so I found that the ROAMX framework is really a good element. I would really love to, you know, hear more about non-government implementations of assessments or processes, because this is very, very interesting to my organization, in the sense that, you know, we are working on a monitoring, evaluation and learning framework for a women-in-leadership program, and we want to find out, you know, how we can use these kinds of frameworks that have been, you know, worked on from different parts of the world and with a lot of, you know, research into them, and see how this can inform part of the initiatives that we take. And the other thing that’s important, again, there’s been very big growth in terms of, you know, interest in entrepreneurship. A lot of startups, you know, all over the continent, a lot of, you know, developers going into this space, a lot of interest even from, you know, right from the university, people wanting to get into this space. And I think the question is, again, we need to find out how this is actually impacting the growth of really productive technologies that are locally responsive in our continent. And I think this is one of the things that we would need to assess, and if we have, you know, a reference, a baseline, then this would really be adequate to help people who take initiatives to, you know, support the reduction of the digital divide, whether it’s gender or generally participation at a highly productive level in terms of software development, whether it’s in open source communities or otherwise, the growth of high-scaling startups in the continent. This could actually, you know, help inform governments who maybe take the initiative to take on these kinds of assessments, but also researchers who just want to establish, you know, what’s going on in different parts of the economy, you know, across the continent. These are some of the comments that I’m able to share at this point, and I’m happy to stay and take any questions that come through later.


Tatevik Grigoryan: Thanks so much for your valuable inputs, Dorcas. And indeed, you mentioned data gaps, which is a major issue across all the countries where we’ve implemented the ROAMX. And what we tend to do is turn these gaps into policy recommendations and indeed encourage data gathering and also its availability. And I’m actually very pleased that Kenya was one of the first countries, along with Brazil, to implement the ROAMX indicators and also to do the first follow-up assessment to measure the progress they’ve made. Now I’d like to give the floor to Chris and ask you, Chris, to please speak. You have a really long-standing engagement with internet governance processes. Based on your experience, could you please elaborate on how you see ROAMX contributing to more concrete, measurable follow-up on WSIS commitments? Thank you.


Chris Buckridge: Thank you, Tatevik, very much for having me here. I feel quite inexpert, really, in comparison to many of the other speakers here today, who’ve been far more involved in the development of the ROAMX principles and in the implementation of the assessments. My own experience of it has been a little bit more piecemeal, sort of watching and observing the development of this, dipping in occasionally in events such as this. And most recently that was in an event at EuroDIG, which is the European Dialogue on Internet Governance, one of the national and regional initiatives in the internet governance space. It’s very fitting in ways that this first session we have today of the IGF 2025, even if it’s perhaps meant a few fewer people in the room, is a good opportunity and time for us to consider the ROAMX principles, consider this project and how it fits into the broader Internet governance space, because I think it is a really important practical development here. And I’m going back to some comments or a phrase that Mr. Jelassi used in his comments at the beginning, data-driven inclusive governance. I think this year, as we’re heading into the WSIS Plus 20 year, we’re very focused on how Internet governance, how digital governance, is evolving. That idea of data-driven inclusive governance is really important because those two concepts are very mutually supporting of each other. Data-driven, it cannot be comprehensive unless it’s inclusive, unless it’s drawing in all aspects of the community. But at the same time, inclusive governance can’t be effective, can’t be practical, unless it is data-driven, unless it’s grounded in the kind of practical knowledge and awareness that a ROAMX assessment can provide. So I think the ROAMX principles, as we look to the evolution of Internet governance, as we look to making practical, output-focused implementations of Internet governance, are a really important example that can be leveraged and can be developed and utilized by the whole community. I think in that sense, what I would see as an important discussion in the context of the Internet Governance Forum, in the context of its wider network of NRIs, national and regional initiatives, is how that can all work together, how it can be complementary. I think the examples that Fabio spoke of in Brazil are really important, that sort of utilisation of NIC.br, CGI.br as a multi-stakeholder element of the assessment process. The ROAMX assessment process always includes that multi-stakeholder advisory committee, and many countries won’t have the situation that Brazil very luckily had of having a pre-existing institution that could serve that function. But I think that in itself is a real opportunity, because we can see there are two possibilities here. There is the possibility of a ROAMX assessment being initiated and actually using or working closely with an existing national or regional initiative to provide and foster that multi-stakeholder input. But on the other hand, if there is not a pre-existing national or regional initiative, a ROAMX assessment and its multi-stakeholder advisory committee could be a really useful catalyst for developing that kind of sustainable, ongoing multi-stakeholder engagement by the community. And that’s going back a bit to what Anriette was saying. 
A ROAMX assessment can be a one-off or can be a recurring tool, but it can also be a method for generating and fostering sustainable multi-stakeholder engagement in these digital governance processes, in the understanding and development of digital governance. So I think the opportunity for complementarity between ROAMX and everything else that is developing and going on in the Internet governance space is really important. And I think it’s one reason why it’s so good to be talking about it here at the Internet Governance Forum.


Tatevik Grigoryan: Thanks so much, Chris, and thank you so much for pointing to collaboration and work with national and regional initiatives, the IGFs. We indeed call on the national and regional initiatives, and we stand ready to work with them to advance and roll out the ROAMX assessments in their local contexts. I think, mindful of time, I wanted now to open the floor to the audience, both online and in the room, if you have any questions for any of the panelists, any reactions or feedback. I would be very interested to hear from Guy Berger, as the father of the ROAMX, as I mentioned. I would be delighted if you could start the interventions from the audience, please. You have to go to the mic.


Guy Berger: Thank you. Hello? Yes, we can hear you. We can hear you, it’s okay. Yes. Thank you so much for the presentation, and wonderful to see this system evolving and being the subject of a panel like this. So, it just struck me that some people, in the dazzle of AI, may think that the term Internet universality is quaint and old-fashioned, but actually, of course, we would not have AI, we would not have data in AI, if we did not have connectivity. And the important thing, I think, about this term Internet universality is that it sensitizes us, as was said, that many people don’t have connectivity, and that impoverishes everybody. But second of all, that connectivity is about not just people having access to content and services, but people having access to produce content and services. And so, if we really want to see a world with many more alternatives to the big digital… players, if we want to see much more content in local languages, then we’ve got to put this emphasis on internet universality, because it is the foundation for everything else that’s happening in the digital world. And so I think that this tool, these internet universality indicators, ROAMX, is a really valuable way for a country to take stock of where the gaps are in terms of actually enabling their society as a whole to have equitable opportunities to become producers and creators in the digital economy and to contribute to the global tech stack. And at the moment, we don’t have that. We’ve got too many big dominant players and much too little participation reflecting the ground-up possibilities that humanity could have from these technologies. So I really commend these indicators as a way to produce an evidence base for progress that can really unleash a lot more participation. Because if we don’t have universality of the internet, all this other stuff is just going to be of limited benefit. Thank you.


Tatevik Grigoryan: Thank you so much, Guy. ADG, would you like to react, please?


Tawfik Jelassi: Yes, I would like to follow up on what Guy Berger just said. Guy mentioned the importance of having digital infrastructure and connectivity in order to create content and services. I would like to add a third pillar, if I may, which is digital literacy: the capacity building and capacity development people need to leverage the digital infrastructure towards creating content and services. I think these are three critical success factors, as I would call them, to ensure this internet universality and meaningful connectivity. Here I want to refer to an international conference that UNESCO organized a couple of weeks ago on capacity building in the fields of AI and digital transformation for the public sector. So again, the emphasis is on capacity building, because our studies and surveys show that in order to bridge the gap we really need this broad capacity building and digital literacy in the new digital age and AI era. Otherwise we cannot have the inclusive information society, the inclusivity, that was mentioned earlier. So I think digital skills and capacity building are a third key pillar, which I wanted to add to what Guy rightly said. Thank you.


Tatevik Grigoryan: Thanks so much, ADG. Anriette, did you want to react, or should we take questions?


Anriette Esterhuysen: If there's another question, I'd rather take that first. Otherwise I'll react.


Tatevik Grigoryan: Are there any questions in the audience? No? Any questions online? I don't see any. Anriette, you can go ahead, please.


Anriette Esterhuysen: So my reaction then is really just this: Guy, as I said, I think digital inclusion is a more meaningful concept for people; internet universality is just harder for people to relate to. That's just a reflection, but I agree with everything else you've said. And then, in response to what Tawfik said about digital literacy: I think the capacity development is absolutely essential, but I think here the ROAMX framework is actually quite useful for assessing how digital literacy programs are designed, developed and implemented. Because so many digital literacy programs are vendor-driven, or actually just teach people how to use their devices. They're not linked with rights education, or civic education as it's called, and they don't really enable people to fully understand the complexity of the social media environment. And I think even just using the ROAMX framework's lenses, such as diversity and gender issues, to assess a digital literacy program is going to produce a better digital literacy program. So you're absolutely right, but we also have to be realistic about the fact that so many digital literacy programs are themselves not connecting the dots.


Tatevik Grigoryan: Good. Thanks so much, Anriette. Are there any further comments or questions, whether online or in the room? I don't see any, so thank you so much. I would now like to give each panelist one minute for their final reflections, anything you wanted to say. We'll start with Fabio, please.


Fabio Senne: Thank you, Tatevik. Well, just to stress a few practical results that we can see from this process. I think one of them is that multi-stakeholder engagement is good not only in terms of the process itself, but also in terms of the quality of the data you can gather. This is something very interesting that we saw in this latest implementation of the model: many sources of information coming from civil society and the private sector that are more or less hidden from official documentation. So this is very key for the process. And a second thing that was already mentioned is the need for data disaggregation to really understand a topic. For instance, with gender gaps in Brazil, if you take just the headline picture of basic access, you don't see huge gaps in terms of access. But when you move to meaningful connectivity, in a deeper analysis, you can see very large gaps. So breaking the indicators down into greater disaggregation is something the ROAMX indicators can do: not just giving a ranking of which country is better, but providing a roadmap for action. I think this is the main characteristic of the ROAMX indicators. Thank you.


Tatevik Grigoryan: Thanks so much, Fabio, and thank you for pointing out the issue of ranking. I think what countries have valued a lot is that ROAMX indeed doesn't do any ranking or comparison: it's a fully voluntary assessment aimed at guiding and helping the country. So this is something very important to point out, and it has been appreciated by all the stakeholders. Chris, would you like to go next?


Chris Buckridge: Sure, thank you, Tatevik. I'll be brief here; I know we're wrapping up. I'll use my time just to agree very strongly with Guy's point about the link between internet universality and so much else of our digital society. I think that's a very live and active discussion at the moment as we look at the Internet Governance Forum. As Anriette said, the term 'Internet Governance Forum' doesn't necessarily capture, for many people, the full breadth of what our digital society now means, but I think the ROAMX framework does a really good job of highlighting and reinforcing how interlinked and interdependent all of these aspects are. So, really important.


Tatevik Grigoryan: Thanks so much. Dorcas, would you like to give your concluding one-minute remarks?


Dorcas Muthoni: Thank you. I would just like to say that I agree with the input on disaggregated data that allows us to pick up different perspectives, for example on gender equality, across different areas of assessment. That would be really important, because without that kind of information initiatives tend to be a bit general, and that can allow, for example, the gender digital divide to persist. Even as we were starting the forum, it was very clear that this is seen as one of the areas we persistently struggle with across the board. And I look at it not just in terms of usage, adoption and access, but also the ability of women to participate in the production and creation of technologies and to be decision-makers and policy-makers. This is another thing we need to look at, because it informs how much we will inspire the generations to come to enter these areas. It's a big gap: we struggle with being the only woman in the room, or with there being no woman in the room, when it comes to a lot of these opportunities. So that's very important. The other thing I would say concerns sustainability: once we get this moving, how well will we be able to sustain it? I think that's the purpose of having regular assessment, which is also very important, because then we know what we have achieved, whether we are keeping up or falling behind, and what's going on. This is really important because we cannot move this world backwards; we are only going forward. So if we know what's happening today and what actions are being taken in terms of policy interventions, then we can see the effectiveness of policy. That's my input, trying to connect the dots. Thank you.


Tatevik Grigoryan: Thank you so much, Dorcas, and thank you for pointing out the gender digital divide, which is one of the key issues we are trying to address, closing this gap of course with support from and collaboration with all actors. Anriette.


Anriette Esterhuysen: Thanks, Tatevik. I started off by saying that effective implementation of a national digital transformation strategy needs consultation, collaboration and connections, and I think that, for me, is the point: with those in place, we are going to have more impact and it will be more inclusive. I'm also very excited by the idea of the national and regional IGFs beginning to explore how they can use the Internet Universality ROAMX framework.


Tatevik Grigoryan: Thanks so much, Anriette. Thank you so much to all the panelists. Before I give the floor to ADG Jelassi to close the session, I wanted to thank each one of you: Dorcas, Davide and Camilla online, and Fabio, Chris, Anriette and Guy, for your valuable contributions. I never cease to learn every time we have a discussion around ROAMX. I am really excited to see the report on Brazil. Thank you so much for your long-standing support for ROAMX, and thank you for all the wonderful ideas and calls to action, which we will take stock of for consideration and action as we carry forward with ROAMX implementation. Thank you so much again. And ADG, would you like to give concluding remarks to close the session?


Tawfik Jelassi: Thank you, Tatevik. I'll be very brief. First of all, I would like to thank all the participants, online but also in the room, who came to this relatively early morning session on day zero of the IGF. Clearly, you have shown commitment, engagement and interest in the subject matter we focused on during this session. I would also like to thank the panelists for sharing with us their expert insights as well as the practical country experiences. I think ultimately, as many of the speakers said, including Guy Berger in his remarks, it's all about digital inclusion, and in the United Nations we have an expression that we use quite often: digital inclusion has to leave no one behind. This is very important; it's at the heart of ROAMX, and it runs along the three pillars which I mentioned, and which were obviously also mentioned by the speakers: digital connectivity, digital literacy and skills, and digital services and content. Stay tuned. If you would like to take this discussion further, feel free to contact us at UNESCO or one of the panelists featured in this session, and enjoy the IGF in the days ahead.


Anriette Esterhuysen: Thanks ADG.



Tawfik Jelassi

Speech speed

118 words per minute

Speech length

1037 words

Speech time

526 seconds

ROAMX stands for Rights, Openness, Accessibility, Multi-stakeholder participation, with X representing cross-cutting issues like sustainability and gender equality

Explanation

Jelassi explains the acronym ROAMX, where R stands for human Rights, O for Openness, A for Accessibility, M for Multi-stakeholder participation, and X refers to cross-cutting issues such as sustainability, gender equality and online safety. This framework provides a comprehensive approach to assessing digital ecosystems.


Evidence

The revised indicators include new dimensions such as AI governance, environmental impact, privacy and meaningful connectivity, aligning the framework with global milestones such as NETmundial+10 and the Global Digital Compact


Major discussion point

ROMEX Framework Overview and Purpose


Topics

Development | Human rights | Legal and regulatory


Agreed with

– Davide Storti
– Fabio Senne
– Anriette Esterhuysen
– Chris Buckridge
– Tatevik Grigoryan

Agreed on

ROMEX framework provides comprehensive assessment methodology for digital development


ROMEX serves as a strategic enabler for national digital assessments and evidence-based policymaking

Explanation

Jelassi argues that ROMEX provides the right tools to identify gaps, guide reforms and measure progress in digital development. The framework supports evidence-based policymaking by helping countries assess their digital needs and ecosystems.


Evidence

Since its initial launch in 2018, and with the second-generation indicators released last year, ROMEX has become a strategic enabler for national digital assessments


Major discussion point

ROMEX Framework Overview and Purpose


Topics

Development | Legal and regulatory


Agreed with

– Davide Storti
– Fabio Senne
– Anriette Esterhuysen
– Chris Buckridge
– Tatevik Grigoryan

Agreed on

ROMEX framework provides comprehensive assessment methodology for digital development


The framework has been applied in over 40 countries with concrete policy outcomes

Explanation

Jelassi demonstrates the practical impact of ROMEX by citing its widespread adoption and concrete results. The framework has moved beyond theory to produce tangible policy changes in multiple countries.


Evidence

In Argentina, the National Digital Assessment informed legislation to reform data protection laws. In Paraguay, the National Statistics Office began collecting disaggregated digital data. Countries like Brazil and Uzbekistan have begun pilot implementation of second-generation indicators


Major discussion point

ROMEX Framework Overview and Purpose


Topics

Development | Legal and regulatory | Human rights


Agreed with

– Fabio Senne
– Anriette Esterhuysen
– Chris Buckridge
– Tatevik Grigoryan

Agreed on

Multi-stakeholder engagement is essential for effective digital governance and policy implementation


2.6 billion people remain offline globally, with only 27% of low-income country populations using the internet compared to 93% in high-income countries

Explanation

Jelassi highlights the persistent global digital divide by presenting stark statistics about internet access disparities. This data underscores the urgent need for frameworks like ROMEX to address digital inequalities.


Evidence

2.6 billion people remain offline as of today, most of them in the least developed regions. In low-income countries, only 27% of the population uses the Internet, compared to 93% in high-income countries


Major discussion point

Digital Divide and Inclusion Challenges


Topics

Development | Digital access


Agreed with

– Fabio Senne
– Anriette Esterhuysen
– Dorcas Muthoni

Agreed on

Persistent digital divides require targeted interventions, especially for marginalized groups


Digital literacy and capacity building are critical success factors alongside infrastructure and connectivity

Explanation

Jelassi argues that having digital infrastructure and connectivity alone is insufficient for meaningful digital participation. He emphasizes that digital literacy and capacity building form a third critical pillar necessary for people to effectively leverage digital infrastructure.


Evidence

UNESCO organized an international conference on capacity building in AI and digital transformation for the public sector. Studies show that bridging the gap requires wide capacity building and digital literacy in the new digital age and AI era


Major discussion point

Internet Universality and Future Digital Cooperation


Topics

Development | Capacity development | Sociocultural


Agreed with

– Anriette Esterhuysen
– Guy Berger

Agreed on

Digital literacy and capacity building are fundamental requirements for meaningful digital participation



Davide Storti

Speech speed

94 words per minute

Speech length

512 words

Speech time

323 seconds

ROMEX translates WSIS ideals into measurable outcomes and provides common language for stakeholders

Explanation

Storti explains how ROMEX bridges the gap between the foundational aspirations of WSIS (like access, inclusion, rights) and practical measurement. The framework enables evidence-based results and collaboration among different WSIS stakeholders by providing a shared framework for assessment.


Evidence

The framework helps measure advancement of technologies like Artificial Intelligence, impact of digitalization, status of indicators like gender equality or rights online, and measurement of data protection, trust in media, and misinformation


Major discussion point

ROMEX Framework Overview and Purpose


Topics

Development | Legal and regulatory | Human rights


Agreed with

– Tawfik Jelassi
– Fabio Senne
– Anriette Esterhuysen
– Chris Buckridge
– Tatevik Grigoryan

Agreed on

ROMEX framework provides comprehensive assessment methodology for digital development



Fabio Senne

Speech speed

118 words per minute

Speech length

1311 words

Speech time

662 seconds

Brazil was the first country to pilot ROMEX in 2018 and has now implemented the revised second-generation indicators

Explanation

Senne describes Brazil’s pioneering role in ROMEX implementation, from being the first pilot country to now implementing the updated framework. This demonstrates Brazil’s continued commitment to the ROMEX methodology and its evolution.


Evidence

Brazil was involved in the discussion of the framework with multi-stakeholder consultations. In 2019, they launched the first assessment report at IGF Berlin. From 2023-2024, Brazil supported UNESCO in revising the indicators


Major discussion point

ROMEX Implementation and Country Experiences


Topics

Development | Legal and regulatory


Brazil shows advances in digital public services but persistent inequalities in access, especially for marginalized groups

Explanation

Senne presents a nuanced view of Brazil’s digital progress, acknowledging significant improvements in government digital services while highlighting ongoing disparities. The assessment reveals that gains are not equally distributed across different population groups.


Evidence

The platform gov.br offers 4,500 services online with over 160 million users. However, significant inequalities persist in access to digital services, especially among populations with low digital literacy, limited connectivity or disabilities


Major discussion point

ROMEX Implementation and Country Experiences


Topics

Development | Digital access | Human rights


Gender and racial disparities persist, with black women in Brazil showing lower levels of meaningful connectivity

Explanation

Senne’s analysis reveals intersectional digital inequalities in Brazil, where race and gender compound to create particularly disadvantaged groups. This finding demonstrates the importance of disaggregated data analysis in understanding digital divides.


Evidence

Black women present lower levels of meaningful connectivity over time, exacerbated by digital skill gaps and mobile-only access among this stratum of the population


Major discussion point

Digital Divide and Inclusion Challenges


Topics

Human rights | Gender rights online | Development


Agreed with

– Tawfik Jelassi
– Anriette Esterhuysen
– Dorcas Muthoni

Agreed on

Persistent digital divides require targeted interventions, especially for marginalized groups


Multi-stakeholder engagement improves data quality by accessing information from civil society and private sector sources

Explanation

Senne argues that involving multiple stakeholders in the ROMEX assessment process enhances the quality and comprehensiveness of data collection. This approach reveals information that might be hidden in official documentation alone.


Evidence

Many sources of information coming from civil society and private sector are more or less hidden in official documentation. The multi-stakeholder advisory committee with CGI.br helps validate the process


Major discussion point

ROMEX Implementation and Country Experiences


Topics

Development | Legal and regulatory


Agreed with

– Tawfik Jelassi
– Anriette Esterhuysen
– Chris Buckridge
– Tatevik Grigoryan

Agreed on

Multi-stakeholder engagement is essential for effective digital governance and policy implementation



Anriette Esterhuysen

Speech speed

151 words per minute

Speech length

1397 words

Speech time

552 seconds

Fiji’s capacity building workshop revealed that government departments were unaware of their own national digital strategy despite consultation efforts

Explanation

Esterhuysen describes a significant discovery during Fiji’s ROMEX workshop: despite an eight-month consultation process, two-thirds of government departments had no knowledge of the national digital strategy. This highlights the complexity of intergovernmental collaboration and the disconnect between strategy development and implementation.


Evidence

About eight different ministries did not know about the national digital strategy, even after the developers believed they had consulted thoroughly. This showed the disconnect between strategy developers and implementers


Major discussion point

ROMEX Implementation and Country Experiences


Topics

Development | Legal and regulatory


Agreed with

– Tawfik Jelassi
– Fabio Senne
– Chris Buckridge
– Tatevik Grigoryan

Agreed on

Multi-stakeholder engagement is essential for effective digital governance and policy implementation


ROMEX framework works effectively throughout the full lifecycle of policy development, from design to monitoring and evaluation

Explanation

Esterhuysen argues that ROMEX’s utility extends far beyond periodic assessments to encompass the entire policy lifecycle. The framework can serve as a planning tool, strategy assessment tool, and monitoring/evaluation instrument, making it highly versatile for policy work.


Evidence

The ROMEX framework is suited to the full lifecycle of policy development and implementation from design to monitoring learning and evaluation. It works as well as a planning tool and in assessing design of strategies


Major discussion point

ROMEX as a Comprehensive Policy Tool


Topics

Development | Legal and regulatory


Agreed with

– Tawfik Jelassi
– Davide Storti
– Fabio Senne
– Chris Buckridge
– Tatevik Grigoryan

Agreed on

ROMEX framework provides comprehensive assessment methodology for digital development


The framework can assess policy instruments and national digital strategies, not just internet environments

Explanation

Esterhuysen discovered that ROMEX’s questions and indicators can be applied directly to evaluate policy documents and strategies, not just national internet contexts. This expands the framework’s applicability significantly beyond its original scope.


Evidence

For example, in Theme F on social, economic and cultural rights, the question ‘does the national strategy for digital development address economic, social and cultural aspects of digital rights?’ can be applied to policy instruments as easily as to national Internet context


Major discussion point

ROMEX as a Comprehensive Policy Tool


Topics

Development | Human rights | Legal and regulatory


ROMEX provides a lens that fills gaps in digital strategies, which often overlook rights, gender, and multi-stakeholder approaches

Explanation

Esterhuysen found that even well-intentioned digital strategies tend to be supply-driven and focus primarily on infrastructure while neglecting crucial elements. ROMEX serves as a complementary lens that identifies and addresses these systematic gaps.


Evidence

Fiji’s national digital strategy was supply-driven, focused on infrastructure and planning, did not focus on rights at all, overlooked multi-stakeholder approaches, treated openness narrowly, and had virtually no content on gender


Major discussion point

ROMEX as a Comprehensive Policy Tool


Topics

Human rights | Gender rights online | Development


Agreed with

– Tawfik Jelassi
– Fabio Senne
– Dorcas Muthoni

Agreed on

Persistent digital divides require targeted interventions, especially for marginalized groups


Digital literacy programs need to connect rights education and civic education, not just device usage training

Explanation

Esterhuysen argues that many digital literacy programs are inadequate because they focus only on technical skills rather than comprehensive digital citizenship. She advocates for programs that integrate rights awareness and civic education to help people understand the complexity of digital environments.


Evidence

Many digital literacy programs are vendor driven or just teach people how to use devices. They’re not linked with rights education or civic education, and don’t enable people to understand the complexity of social media environments


Major discussion point

Internet Universality and Future Digital Cooperation


Topics

Sociocultural | Online education | Human rights


Agreed with

– Tawfik Jelassi
– Guy Berger

Agreed on

Digital literacy and capacity building are fundamental requirements for meaningful digital participation


Disagreed with

– Guy Berger

Disagreed on

Terminology preference for Internet Universality vs Digital Inclusion



Dorcas Muthoni

Speech speed

149 words per minute

Speech length

1306 words

Speech time

525 seconds

Lack of sex-disaggregated data makes it difficult to assess gender digital divide and technology adoption disparities

Explanation

Muthoni identifies a critical data gap that hampers efforts to understand and address gender inequalities in digital access and adoption. Without proper disaggregated data, it becomes challenging to develop targeted interventions or measure progress in closing gender digital divides.


Evidence

When assessing gender digital divide, penetration, access, how social norms affect adoption and inclusivity, and disparities in technology adoption, there’s hardly any data available for analysis


Major discussion point

Digital Divide and Inclusion Challenges


Topics

Human rights | Gender rights online | Development


Agreed with

– Tawfik Jelassi
– Fabio Senne
– Anriette Esterhuysen

Agreed on

Persistent digital divides require targeted interventions, especially for marginalized groups


Women face challenges progressing to leadership roles in technology, creating lonely career journeys and limiting role models

Explanation

Muthoni describes systemic barriers that prevent women from advancing to leadership positions in technology sectors. This creates a cycle where the lack of female role models discourages other women from pursuing or persisting in technology careers.


Evidence

Women who succeed in technology want to give back because they have had very lonely career journeys. There’s a need to support women to reach leadership roles in technical areas, policy and decision-making roles to form role models for young generations


Major discussion point

Digital Divide and Inclusion Challenges


Topics

Human rights | Gender rights online | Economic



Chris Buckridge

Speech speed

132 words per minute

Speech length

731 words

Speech time

330 seconds

Data-driven inclusive governance requires both comprehensive data and inclusive participation to be effective

Explanation

Buckridge argues that data-driven and inclusive governance are mutually reinforcing concepts. Effective governance cannot be truly data-driven without inclusive participation, and inclusive governance cannot be practical without being grounded in comprehensive data and evidence.


Evidence

Data-driven governance cannot be comprehensive unless it’s inclusive, drawing in all aspects of the community. Inclusive governance can’t be effective unless it is data-driven and grounded in practical knowledge that a ROMEX assessment can provide


Major discussion point

ROMEX as a Comprehensive Policy Tool


Topics

Development | Legal and regulatory


Agreed with

– Tawfik Jelassi
– Davide Storti
– Fabio Senne
– Anriette Esterhuysen
– Tatevik Grigoryan

Agreed on

ROMEX framework provides comprehensive assessment methodology for digital development


ROMEX can foster sustainable multi-stakeholder engagement and complement national/regional internet governance initiatives

Explanation

Buckridge sees ROMEX as both benefiting from and contributing to multi-stakeholder governance structures. The framework can work with existing initiatives like national IGFs, or help catalyze new multi-stakeholder engagement where none exists.


Evidence

ROMEX assessment can work with existing national/regional initiatives to provide multi-stakeholder input, or if no pre-existing initiative exists, it can be a catalyst for developing sustainable multi-stakeholder engagement in digital governance processes


Major discussion point

Internet Universality and Future Digital Cooperation


Topics

Development | Legal and regulatory


Agreed with

– Tawfik Jelassi
– Fabio Senne
– Anriette Esterhuysen
– Tatevik Grigoryan

Agreed on

Multi-stakeholder engagement is essential for effective digital governance and policy implementation



Guy Berger

Speech speed

130 words per minute

Speech length

332 words

Speech time

152 seconds

Internet universality remains foundational for AI and digital technologies, enabling people to both access and produce content and services

Explanation

Berger argues that despite the excitement around AI and new technologies, internet universality remains crucial as the foundation that enables all other digital developments. He emphasizes that true universality means people can both consume and create digital content and services.


Evidence

We would not have AI or data in AI without connectivity. Internet universality enables people to have access not just to content and services, but to produce content and services, contributing to alternatives to big digital players and content in local languages


Major discussion point

Internet Universality and Future Digital Cooperation


Topics

Development | Infrastructure | Sociocultural


Agreed with

– Tawfik Jelassi
– Anriette Esterhuysen

Agreed on

Digital literacy and capacity building are fundamental requirements for meaningful digital participation


Disagreed with

– Anriette Esterhuysen

Disagreed on

Terminology preference for Internet Universality vs Digital Inclusion



Tatevik Grigoryan

Speech speed

117 words per minute

Speech length

1933 words

Speech time

984 seconds

ROMEX demonstrates its value through successful implementation in multiple countries including Brazil and Uzbekistan with revised indicators

Explanation

Grigoryan emphasizes that ROMEX has proven its effectiveness through practical applications across different countries. The framework has evolved with revised indicators that are being piloted in Brazil and Uzbekistan, showing its adaptability and continued relevance.


Evidence

Brazil and Uzbekistan have begun the pilot implementation of revised indicators, and the session focuses on demonstrating how ROMEX has been introduced in these countries


Major discussion point

ROMEX Implementation and Country Experiences


Topics

Development | Legal and regulatory


ROMEX serves as a framework for assessing progress on WSIS commitments and SDGs through evidence-based policy making

Explanation

Grigoryan positions ROMEX as a tool that can measure and evaluate progress toward international commitments like WSIS and Sustainable Development Goals. The framework provides evidence-based foundations for policy decisions and progress tracking.


Evidence

The session focuses on integrating the relevance of ROMEX framework in assessing the progress on WSIS commitments and the SDGs


Major discussion point

ROMEX Framework Overview and Purpose


Topics

Development | Legal and regulatory


Agreed with

– Tawfik Jelassi
– Davide Storti
– Fabio Senne
– Anriette Esterhuysen
– Chris Buckridge

Agreed on

ROMEX framework provides comprehensive assessment methodology for digital development


ROMEX capacity building workshops support multi-stakeholder advisory boards and wider stakeholder communities in policy implementation

Explanation

Grigoryan describes a new intervention approach where ROMEX assessments are followed by capacity building workshops. These workshops help stakeholders implement recommendations and use assessment findings as evidence for digital policy making and implementation.


Evidence

A capacity building workshop took place in Fiji to support the multi-stakeholder advisory board and wider stakeholder community in implementing recommendations focusing on digital policy making, policy implementation, and capacity building


Major discussion point

ROMEX as a Comprehensive Policy Tool


Topics

Development | Capacity development


Agreed with

– Tawfik Jelassi
– Fabio Senne
– Anriette Esterhuysen
– Chris Buckridge

Agreed on

Multi-stakeholder engagement is essential for effective digital governance and policy implementation


ROMEX provides a voluntary assessment approach that avoids ranking or comparison between countries

Explanation

Grigoryan emphasizes that ROMEX is designed as a supportive tool rather than a competitive assessment mechanism. Countries appreciate that the framework focuses on guidance and assistance rather than creating hierarchies or comparisons between nations.


Evidence

ROMEX doesn’t do any ranking or comparison and it’s a fully voluntary assessment aimed at guiding and helping the country, which has been appreciated by all stakeholders


Major discussion point

ROMEX Framework Overview and Purpose


Topics

Development | Legal and regulatory


ROMEX stands ready to collaborate with national and regional IGF initiatives to advance local assessments

Explanation

Grigoryan calls for collaboration between ROMEX and existing governance structures like national and regional Internet Governance Forums. This partnership approach aims to leverage existing multi-stakeholder mechanisms to implement ROMEX assessments at local levels.


Evidence

UNESCO calls on national and regional initiatives and IGFs and stands ready to work with them to advance and unroll the assessments of ROMEX at their local context


Major discussion point

Internet Universality and Future Digital Cooperation


Topics

Development | Legal and regulatory


Agreements

Agreement points

ROMEX framework provides comprehensive assessment methodology for digital development

Speakers

– Tawfik Jelassi
– Davide Storti
– Fabio Senne
– Anriette Esterhuysen
– Chris Buckridge
– Tatevik Grigoryan

Arguments

ROAMX stands for Rights, Openness, Accessibility, Multi-stakeholder participation, with X representing cross-cutting issues like sustainability and gender equality


ROMEX serves as a strategic enabler for national digital assessments and evidence-based policymaking


ROMEX translates WSIS ideals into measurable outcomes and provides common language for stakeholders


ROMEX framework works effectively throughout the full lifecycle of policy development, from design to monitoring and evaluation


Data-driven inclusive governance requires both comprehensive data and inclusive participation to be effective


ROMEX serves as a framework for assessing progress on WSIS commitments and SDGs through evidence-based policy making


Summary

All speakers agree that ROMEX provides a valuable, comprehensive framework for assessing digital development that encompasses rights, openness, accessibility, and multi-stakeholder participation while serving multiple purposes from assessment to policy planning


Topics

Development | Legal and regulatory | Human rights


Multi-stakeholder engagement is essential for effective digital governance and policy implementation

Speakers

– Tawfik Jelassi
– Fabio Senne
– Anriette Esterhuysen
– Chris Buckridge
– Tatevik Grigoryan

Arguments

The framework has been applied in over 40 countries with concrete policy outcomes


Multi-stakeholder engagement improves data quality by accessing information from civil society and private sector sources


Fiji’s capacity building workshop revealed that government departments were unaware of their own national digital strategy despite consultation efforts


ROMEX can foster sustainable multi-stakeholder engagement and complement national/regional internet governance initiatives


ROMEX capacity building workshops support multi-stakeholder advisory boards and wider stakeholder communities in policy implementation


Summary

Speakers consistently emphasize that meaningful multi-stakeholder participation is crucial for successful digital policy development and implementation, with ROMEX serving as a tool to facilitate this engagement


Topics

Development | Legal and regulatory


Persistent digital divides require targeted interventions, especially for marginalized groups

Speakers

– Tawfik Jelassi
– Fabio Senne
– Anriette Esterhuysen
– Dorcas Muthoni

Arguments

2.6 billion people remain offline globally, with only 27% of low-income country populations using the internet compared to 93% in high-income countries


Gender and racial disparities persist, with black women in Brazil showing lower levels of meaningful connectivity


ROMEX provides a lens that fills gaps in digital strategies, which often overlook rights, gender, and multi-stakeholder approaches


Lack of sex-disaggregated data makes it difficult to assess gender digital divide and technology adoption disparities


Summary

All speakers acknowledge that significant digital inequalities persist, particularly affecting women, racial minorities, and people in low-income regions, requiring evidence-based approaches to address these gaps


Topics

Development | Human rights | Gender rights online | Digital access


Digital literacy and capacity building are fundamental requirements for meaningful digital participation

Speakers

– Tawfik Jelassi
– Anriette Esterhuysen
– Guy Berger

Arguments

Digital literacy and capacity building are critical success factors alongside infrastructure and connectivity


Digital literacy programs need to connect rights education and civic education, not just device usage training


Internet universality remains foundational for AI and digital technologies, enabling people to both access and produce content and services


Summary

Speakers agree that technical access alone is insufficient and that comprehensive digital literacy, including rights awareness and civic education, is essential for people to meaningfully participate in digital society


Topics

Development | Sociocultural | Online education | Human rights


Similar viewpoints

Both speakers emphasize the critical importance of comprehensive, disaggregated data collection that includes multiple stakeholder perspectives to understand and address digital inequalities effectively

Speakers

– Fabio Senne
– Dorcas Muthoni

Arguments

Multi-stakeholder engagement improves data quality by accessing information from civil society and private sector sources


Lack of sex-disaggregated data makes it difficult to assess gender digital divide and technology adoption disparities


Topics

Development | Human rights | Gender rights online


Both speakers view ROMEX as a comprehensive tool that can support various stages of policy work while emphasizing the interconnected nature of data-driven and inclusive approaches to governance

Speakers

– Anriette Esterhuysen
– Chris Buckridge

Arguments

ROMEX framework works effectively throughout the full lifecycle of policy development, from design to monitoring and evaluation


Data-driven inclusive governance requires both comprehensive data and inclusive participation to be effective


Topics

Development | Legal and regulatory


Both speakers emphasize that internet universality and meaningful connectivity require more than just technical infrastructure – they need comprehensive capacity building to enable people to be both consumers and creators in the digital economy

Speakers

– Tawfik Jelassi
– Guy Berger

Arguments

Digital literacy and capacity building are critical success factors alongside infrastructure and connectivity


Internet universality remains foundational for AI and digital technologies, enabling people to both access and produce content and services


Topics

Development | Infrastructure | Sociocultural


Unexpected consensus

ROMEX as a policy planning and evaluation tool beyond assessment

Speakers

– Anriette Esterhuysen
– Chris Buckridge
– Tatevik Grigoryan

Arguments

The framework can assess policy instruments and national digital strategies, not just internet environments


ROMEX can foster sustainable multi-stakeholder engagement and complement national/regional internet governance initiatives


ROMEX capacity building workshops support multi-stakeholder advisory boards and wider stakeholder communities in policy implementation


Explanation

While ROMEX was originally conceived as an assessment framework, speakers discovered unexpected consensus around its utility as a comprehensive policy tool that can be used for planning, strategy evaluation, and ongoing governance processes, expanding its application beyond periodic assessments


Topics

Development | Legal and regulatory


The concept of ‘Internet universality’ may be outdated while ROMEX principles remain relevant

Speakers

– Anriette Esterhuysen
– Guy Berger

Arguments

ROMEX provides a lens that fills gaps in digital strategies, which often overlook rights, gender, and multi-stakeholder approaches


Internet universality remains foundational for AI and digital technologies, enabling people to both access and produce content and services


Explanation

There was unexpected consensus that while the term ‘Internet universality’ may be difficult for people to relate to in the AI era, the underlying ROMEX principles remain highly relevant and future-proof, suggesting a need to evolve terminology while maintaining core concepts


Topics

Development | Sociocultural


Overall assessment

Summary

The speakers demonstrated remarkably high consensus across all major aspects of ROMEX implementation and digital governance. Key areas of agreement include: the comprehensive value of the ROMEX framework for digital assessment and policy work; the critical importance of multi-stakeholder engagement; the persistence of digital divides requiring targeted interventions; and the need for holistic approaches to digital literacy and capacity building.


Consensus level

Very high consensus with no significant disagreements identified. The speakers built upon each other’s points constructively, with practical implementers (Brazil, Fiji, Kenya) validating the theoretical framework presented by UNESCO officials. This strong consensus suggests ROMEX has achieved broad acceptance among diverse stakeholders and demonstrates its practical utility across different contexts. The implications are positive for ROMEX’s continued development and adoption, as the framework appears to have successfully bridged the gap between academic theory and practical implementation needs.


Differences

Different viewpoints

Terminology preference for Internet Universality vs Digital Inclusion

Speakers

– Guy Berger
– Anriette Esterhuysen

Arguments

Internet universality remains foundational for AI and digital technologies, enabling people to both access and produce content and services


Digital literacy programs need to connect rights education and civic education, not just device usage training


Summary

Guy Berger defended the continued relevance of ‘Internet universality’ as a foundational concept, while Anriette Esterhuysen suggested that ‘digital inclusion’ is a more meaningful and relatable concept for people to understand


Topics

Development | Sociocultural


Unexpected differences

Overall assessment

Summary

The discussion showed remarkable consensus among speakers with only minor terminological preferences and approaches to implementation differing


Disagreement level

Very low level of disagreement. The speakers were largely aligned on the value and importance of the ROMEX framework, the challenges of digital divides, and the need for inclusive digital governance. The only notable disagreement was a terminological preference between ‘Internet universality’ and ‘digital inclusion,’ which does not affect the substantive policy recommendations. This high level of consensus suggests strong foundational agreement on the framework’s value and approach, which bodes well for its continued development and implementation.


Partial agreements



Takeaways

Key takeaways

ROAMX framework (Rights, Openness, Accessibility, Multi-stakeholder participation, plus cross-cutting issues) serves as an effective tool for measuring WSIS implementation and guiding evidence-based digital policymaking


The framework has demonstrated practical value across over 40 countries, with concrete policy outcomes including legislative reforms and improved data collection practices


ROMEX works throughout the full policy lifecycle – from design and planning to implementation, monitoring, and evaluation – not just as a one-time assessment tool


Digital divides persist globally with 2.6 billion people offline, and significant inequalities exist even within countries that have made digital progress, particularly affecting women, racial minorities, and rural populations


Multi-stakeholder engagement is essential for both effective policy implementation and quality data collection, but coordination challenges exist even within government departments


Data disaggregation is crucial for understanding true digital inequalities – surface-level access statistics can mask deeper connectivity and usage gaps


Internet universality remains foundational for emerging technologies like AI, requiring not just access but the ability for people to produce and create digital content and services


Digital literacy programs need comprehensive approaches that include rights education and civic engagement, not just technical device training


Resolutions and action items

UNESCO calls on governments, regulators, civil society, and stakeholders to embrace ROMEX as a strategic tool for digital transformation


Brazil will complete multi-stakeholder validation of their revised ROMEX assessment and launch the final report by September-October


Encourage national and regional Internet Governance Forums to explore using the ROMEX framework for their initiatives


Promote non-governmental implementations of ROMEX assessments to support broader stakeholder engagement


Address data gaps by turning them into policy recommendations and encouraging improved data gathering and availability


Develop collaboration between ROMEX assessments and existing national/regional initiatives to foster sustainable multi-stakeholder engagement


Unresolved issues

Lack of comprehensive sex-disaggregated data across countries makes it difficult to properly assess and address gender digital divides


Environmental impact indicators are largely overlooked in digital policies and governance frameworks


AI governance frameworks remain incomplete in many countries, with crucial aspects like transparency and rights-based safeguards still unresolved


Multi-stakeholder participation remains inconsistent across different government ministries and regulatory environments


The concept of ‘Internet Governance Forum’ is poorly understood by many stakeholders, limiting engagement in digital cooperation processes


Sustainability of ROMEX implementation and regular assessments requires ongoing commitment and resources


Digital literacy programs often remain vendor-driven or device-focused rather than comprehensive rights-based approaches


Suggested compromises

Use ‘digital inclusion’ terminology instead of ‘Internet universality’ as it is more relatable and meaningful to stakeholders


Leverage ROMEX assessments as catalysts for creating multi-stakeholder advisory committees in countries lacking existing institutions


Combine ROMEX framework with national digital strategy development to ensure comprehensive coverage of rights, openness, accessibility, and multi-stakeholder principles


Encourage both governmental and non-governmental implementations of ROMEX to broaden participation and impact


Thought provoking comments

Technology is neither good nor bad, nor is it neutral. The impact of technology is shaped by human intent, by the choices we make, the values that we want to protect, and the systems we design.

Speaker

Tawfik Jelassi


Reason

This quote from historian Melvin Kranzberg reframes the entire discussion by challenging the common assumption that technology is neutral. It emphasizes human agency and responsibility in shaping digital outcomes, which directly supports the need for frameworks like ROMEX that embed human rights and values into digital governance.


Impact

This philosophical foundation set the tone for the entire session, establishing that digital transformation requires intentional, values-based approaches rather than purely technical solutions. It provided the conceptual framework that justified all subsequent discussions about ROMEX as a tool for ensuring technology serves human development.


Even after about an eight-month period of the people developing the national digital strategy believing that they’ve consulted thoroughly, two-thirds of the government departments… did not know about the national digital strategy

Speaker

Anriette Esterhuysen


Reason

This revelation from the Fiji workshop exposed a critical gap between policy development and implementation that goes beyond technical issues to fundamental governance challenges. It highlighted how even well-intentioned consultation processes can fail dramatically.


Impact

This comment shifted the discussion from celebrating ROMEX assessments to acknowledging the complex realities of policy implementation. It led to deeper exploration of how ROMEX could serve not just as an assessment tool but as a bridge between strategy development and actual implementation, emphasizing the need for sustained multi-stakeholder engagement.


The ROMEX framework is not just suited to assessing a national internet environment. It actually works extremely well in assessing a strategy… it could work as well as a planning tool… suited to the full lifecycle of policy development and implementation from design to monitoring learning and evaluation

Speaker

Anriette Esterhuysen


Reason

This insight expanded the conceptual boundaries of ROMEX beyond its original assessment function, revealing its potential as a comprehensive policy tool. It demonstrated how frameworks can evolve beyond their initial design to serve broader purposes.


Impact

This comment fundamentally reframed how participants viewed ROMEX’s utility, moving from seeing it as a periodic assessment tool to understanding it as an integrated policy lifecycle instrument. It opened new avenues for discussion about practical applications and sparked interest from other speakers about implementation possibilities.


The concept of Internet universality was not future-proof… people found that concept of Internet universality difficult to relate to. But they did not find the principles of rights, openness, accessibility, multi-stakeholder, and the issues covered under cross-cutting difficult to relate to

Speaker

Anriette Esterhuysen


Reason

This observation challenged a core UNESCO concept while validating the ROMEX framework itself. It provided crucial feedback about how terminology and framing affect stakeholder engagement and understanding.


Impact

This comment created a moment of tension in the discussion, as it directly challenged UNESCO’s foundational concept. It prompted Guy Berger to defend the importance of ‘Internet universality’ and led to a nuanced exchange about terminology versus substance, ultimately enriching the conversation about effective communication of digital inclusion concepts.


There’s hardly any data… This is very challenging because then it means that this is one area that we are not really proactive in assessing… When you go to small businesses… you then struggle to find data that you can rely on

Speaker

Dorcas Muthoni


Reason

This comment highlighted a fundamental challenge that undermines evidence-based policymaking – the absence of disaggregated data, particularly for gender and small business impacts. It connected the technical framework discussion to real-world implementation barriers.


Impact

This intervention grounded the theoretical discussion in practical realities, leading other speakers to emphasize the importance of data disaggregation. It reinforced the value proposition of ROMEX by highlighting how it can identify and address critical data gaps that policymakers might otherwise overlook.


Data-driven inclusive governance… those two concepts are very mutually supporting of each other. Data-driven, it cannot be comprehensive unless it’s inclusive… But at the same time, inclusive governance can’t be effective, can’t be practical unless it is data-driven

Speaker

Chris Buckridge


Reason

This comment articulated a sophisticated understanding of the symbiotic relationship between evidence-based policy and participatory governance, showing how ROMEX addresses both dimensions simultaneously.


Impact

This insight helped synthesize earlier discussions about multi-stakeholder engagement and evidence-based policy, providing a theoretical framework that connected various speakers’ practical experiences. It elevated the conversation by showing how ROMEX addresses fundamental governance challenges rather than just technical assessment needs.


Overall assessment

These key comments transformed what could have been a routine presentation of ROMEX achievements into a sophisticated exploration of digital governance challenges and solutions. The discussion evolved from initial technical presentations to deeper questions about policy implementation, stakeholder engagement, and the relationship between assessment frameworks and real-world change. Anriette Esterhuysen’s insights particularly drove this evolution, challenging assumptions and expanding the conceptual scope of ROMEX’s utility. The interplay between theoretical frameworks (Jelassi’s technology neutrality quote) and practical realities (Muthoni’s data gaps, Esterhuysen’s Fiji experience) created a rich dialogue that demonstrated both the potential and limitations of current approaches to digital governance. The session successfully moved beyond advocacy for ROMEX to critical examination of how such frameworks can be more effectively integrated into the full spectrum of digital policy development and implementation.


Follow-up questions

How can non-government implementations of ROMEX assessments be conducted and what frameworks exist for this?

Speaker

Dorcas Muthoni


Explanation

She expressed interest in using ROMEX frameworks for monitoring and evaluation in her organization’s women in leadership program, indicating a need for guidance on non-governmental applications


How can National and Regional Internet Governance Forums (NRIs) integrate and utilize the ROMEX framework?

Speaker

Chris Buckridge and Anriette Esterhuysen


Explanation

Both speakers highlighted the potential for collaboration between ROMEX assessments and existing NRIs, with Anriette expressing excitement about NRIs exploring how to use the framework


How can the concept of ‘Internet Governance Forum’ be better communicated to convey its broader scope of digital cooperation and governance?

Speaker

Anriette Esterhuysen


Explanation

She noted that people found the term difficult to understand and didn’t grasp that it involves all aspects of digital cooperation, not just narrow internet governance


How can environmental sustainability indicators be better integrated into digital governance frameworks?

Speaker

Fabio Senne


Explanation

He identified that environmental issues like energy consumption, e-waste and emissions are largely overlooked in digital policies and need better integration


What strategies can address the persistent gender digital divide, particularly in leadership and decision-making roles in technology?

Speaker

Dorcas Muthoni


Explanation

She highlighted the challenge of supporting women to reach leadership positions in technology and the lack of data to assess progress in this area


How can disaggregated data collection be improved to better understand digital inequalities across different demographic groups?

Speaker

Fabio Senne and Dorcas Muthoni


Explanation

Both speakers emphasized the critical need for better disaggregated data to understand gaps in meaningful connectivity, particularly for marginalized groups like black women in Brazil


How can ROAMX be used as a planning and monitoring tool throughout the full lifecycle of policy development, not just for assessment?

Speaker

Anriette Esterhuysen


Explanation

She discovered that ROAMX could work as a planning tool and for monitoring/evaluation, suggesting this application needs further exploration and development


How can multi-stakeholder participation be made more consistent across different government ministries and regulatory environments?

Speaker

Fabio Senne


Explanation

He noted that while Brazil has good multi-stakeholder frameworks, participation remains fragmented across different government departments


How can digital literacy programs be redesigned to connect rights education, civic education, and understanding of complex digital environments?

Speaker

Anriette Esterhuysen


Explanation

She pointed out that many digital literacy programs are vendor-driven or device-focused and don’t address the broader complexity of digital citizenship


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Day 0 Event #59 How to Develop Trustworthy Products and Policies

Day 0 Event #59 How to Develop Trustworthy Products and Policies

Session at a glance

Summary

This discussion was a workshop session at IGF 2025 titled “How to Develop Trustworthy Products and Policies,” nicknamed “Project Manager for a Day” by Google. The session was moderated by Jim Prendergast and featured Google speakers Will Carter (AI policy expert) and Nadja Blagojevic (trust manager), who aimed to give participants insight into the role of product managers at Google and the challenges they face when launching products.


Nadja began by explaining that product managers identify problems to solve, develop vision and strategy, create roadmaps, and coordinate with teams including user experience (UX) designers and engineers. She emphasized the importance of iterative design and validation at different fidelity levels, noting that small changes in language and design can significantly impact product adoption. The speakers presented two case studies: AI Overviews, which uses generative AI to provide comprehensive responses to complex search queries with high-quality sources, and About This Image, a tool that helps users understand the context and credibility of images online, including detection of AI-generated content through SynthID watermarking.


Following the presentations, participants broke into groups to brainstorm product ideas focusing on information quality, news credibility, and privacy. The in-person groups developed concepts for flagging AI-generated or false news content in search results, while the online group, led by Hassan Al-Mahmid from Kuwait’s telecommunications authority, proposed an AI-powered system to automate domain name registration verification using document recognition and validation. All groups emphasized the need for collaboration between engineering, UX, legal teams, and subject matter experts, while considering cultural competency and building user trust. The session highlighted the complex considerations involved in product development, particularly around information quality and trustworthiness in the digital age.


Keypoints

## Major Discussion Points:


– **Product Management at Google**: Overview of how product managers identify problems, develop vision and strategy, create roadmaps, and coordinate with UX designers and engineers to deliver features that solve user needs


– **AI-powered Features and Trust**: Case studies of Google’s AI Overviews and “About This Image” feature, demonstrating how the company approaches building trustworthy AI products with quality controls, source verification, and transparency tools


– **Information Quality and News Credibility**: Multiple breakout groups focused on developing features to help users identify reliable news sources, detect AI-generated content, and provide context about information credibility through visual indicators and fact-checking partnerships


– **Domain Registration Automation**: Presentation of a real-world case study from Kuwait’s domain authority (.kw) exploring how AI tools could streamline government processes for validating commercial entity documentation and domain name registration


– **Cross-sector Collaboration Needs**: Discussion of how addressing online trust and information quality requires partnerships between private companies, government agencies, fact-checking organizations, and civil society groups


## Overall Purpose:


The discussion was designed as an interactive workshop called “Project Manager for a Day” to give participants hands-on experience with product management challenges at Google, specifically focusing on how to develop trustworthy products and policies while balancing various stakeholder needs and technical constraints.


## Overall Tone:


The tone was educational and collaborative throughout, beginning formally with structured presentations but becoming increasingly interactive and engaged during the breakout sessions. Participants showed genuine enthusiasm for tackling real-world problems, and the facilitators maintained an encouraging, supportive atmosphere while acknowledging the complexity of the challenges being discussed. The session ended on a positive note with appreciation for the collaborative dialogue between different sectors.


Speakers

– **Will Carter** – AI policy expert with extensive experience in shaping government policies and regulations on AI; currently works on leading AI policy in the knowledge and information team at Google, where he leads engagement on AI policy and regulatory standards with senior policy makers around the world; previously worked at the Center for Strategic and International Studies focusing on international technology policy issues


– **Jim Prendergast** – Works with the Galway Strategy Group; serves as moderator for the session


– **Nadja Blagojevic** – Knowledge and information trust manager at Google with over 15 years of experience in the tech industry; expert in online safety and digital literacy; based in London; has held various leadership positions at Google including leading work across Europe on family safety and content responsibility


– **Hassan Al-Mahmid** – From Kuwait, works at the Communication and Information Technology Regulatory Authority (CITRA); in charge of the .kw domain space; responsible for domain name registrations and policy making for Kuwait’s country code top-level domain


– **Audience** – Multiple audience members participated in discussions and breakout sessions


**Additional speakers:**


– **Nidhi** – Joining from India; academic doing PhD work that lies between tech and public policy in various areas of ethics


– **Abdar** – From India; works as an internet governance intern at National Internet Exchange of India, working between tech and policy


– **Oliver** – Appears to be event staff managing time and logistics (mentioned as giving time signals from the back of the room)


Full session report

# Workshop Report: “How to Develop Trustworthy Products and Policies”


## Executive Summary


This report summarizes the “Project Manager for a Day” workshop session held during IGF, titled “How to Develop Trustworthy Products and Policies.” The one-hour interactive session (9-10 AM on day zero) was designed as an educational experience led by Google representatives to give participants hands-on insight into product management challenges, particularly focusing on developing trustworthy products and policies in the digital age.


The workshop engaged both in-person and online participants in collaborative problem-solving exercises, resulting in three concrete product proposals addressing news credibility, government process automation, and information quality. The session successfully demonstrated the complexities of product development while providing practical experience in collaborative problem-solving.


## Session Structure and Participants


### Facilitators and Speakers


The session was moderated by **Jim Prendergast** from the Galway Strategy Group. The primary speakers were **Nadja Blagojevic**, Google’s Knowledge and Information Trust Manager based in London (joining remotely), and **Will Carter**, an AI policy expert from Google.


Key participants included **Hassan Al-Mahmid** from Kuwait’s Communication and Information Technology Regulatory Authority (CITRA), **Nidhi**, a PhD researcher from India working on tech and public policy ethics, and **Abdar**, an internet governance intern at the National Internet Exchange of India.


### Workshop Format


The session followed a structured approach:


1. Introductions and product management fundamentals


2. Case studies of Google’s AI-powered features


3. Collaborative breakout sessions (15-20 minutes)


4. Final presentations (2-3 minutes each)


Technical challenges with remote participation were noted, with some audio difficulties for online participants.


## Product Management Fundamentals


Nadja Blagojevic explained that product managers at Google are responsible for identifying problems to solve, developing vision and strategy, creating roadmaps, and coordinating with cross-functional teams. She emphasized the collaborative nature of product development, noting that product managers work closely with UX designers and engineers throughout the development process.


The iterative design process was highlighted as crucial, with products validated at different fidelity levels throughout development. Blagojevic noted that seemingly minor changes in language and design can significantly impact product adoption.


She distinguished between obvious improvements and less obvious innovations that solve problems users don’t realize they have, using Google Street View as an example of addressing a latent need for location visualization.


## Case Studies: Google’s AI Features


### AI Overviews


Nadja presented AI Overviews as an example of how Google approaches trustworthy AI implementation. This feature uses generative AI to provide comprehensive responses to complex search queries, appearing only when they add value beyond regular search results. The feature is designed to show only information supported by high-quality results and includes safeguards against hallucination.
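
The session did not spell out how that quality bar is enforced, but the underlying idea of grounding (only surface generated statements that are corroborated by high-quality retrieved sources) can be illustrated with a toy filter. The sketch below uses simple word overlap as a stand-in for real source ranking and entailment checking; the threshold and metric are assumptions for illustration, not Google’s method:

```python
# A toy illustration of "grounding": keep only draft sentences whose words
# overlap strongly with retrieved high-quality snippets. Real systems use far
# more sophisticated source-ranking and entailment checks; the 0.6 threshold
# and word-overlap metric here are illustrative assumptions.
def is_grounded(sentence: str, snippets: list[str], min_overlap: float = 0.6) -> bool:
    words = set(sentence.lower().split())
    if not words or not snippets:
        return False
    best = max(
        len(words & set(snippet.lower().split())) / len(words)
        for snippet in snippets
    )
    return best >= min_overlap

draft = [
    "Include proof of stable income with your application.",
    "Landlords never check references.",  # unsupported claim, gets filtered
]
sources = ["Applicants should include proof of stable income and references."]
overview = [s for s in draft if is_grounded(s, sources)]
print(overview)  # only the supported sentence survives
```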


### About This Image


Will Carter presented “About This Image,” a tool designed to help users understand the context and credibility of images online, including detection of AI-generated content. The tool provides contextual information about image sources and authenticity.


Central to this tool is SynthID, Google’s digital watermarking technology that embeds detectable markers in AI-generated images. These watermarks remain identifiable even after alterations such as cropping or resizing. Carter noted that all images created with Google’s consumer AI tools are marked with SynthID.
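
SynthID itself is proprietary and works at the pixel level, but the complementary metadata route mentioned in the session (publishers and tools tagging images as AI-generated) is straightforward to illustrate. A minimal sketch, assuming the IPTC digital-source-type marker “trainedAlgorithmicMedia” and a hypothetical file path; this is a metadata check, not a SynthID detector:

```python
# A minimal, illustrative sketch of metadata-based provenance checking -- not
# SynthID, which is a proprietary pixel-level watermark. Some generators embed
# the IPTC digital-source-type value "trainedAlgorithmicMedia" in an image's
# XMP metadata; scanning the raw bytes for it is a crude but workable check.
from pathlib import Path

AI_MARKERS = (
    b"trainedAlgorithmicMedia",                # IPTC: fully AI-generated
    b"compositeWithTrainedAlgorithmicMedia",   # IPTC: AI-assisted composite
)

def has_ai_metadata(image_path: str) -> bool:
    """Return True if the file carries a known AI-generation metadata marker.

    Metadata can be stripped or forged, so absence proves nothing -- which is
    exactly why pixel-level watermarks like SynthID were developed.
    """
    data = Path(image_path).read_bytes()
    return any(marker in data for marker in AI_MARKERS)

if __name__ == "__main__":
    sample = Path("example.jpg")  # hypothetical file path
    if sample.exists():
        print(has_ai_metadata(str(sample)))
```

The fragility noted in the comments is the design point Carter makes: metadata tags survive only as long as no one strips them, whereas a pixel-level watermark survives cropping, resizing, and recoloring.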


## Breakout Session Outcomes


### In-Person Groups: News Credibility Solutions


The physical room was divided into two groups that focused on news credibility and information quality challenges. Their proposals included:


1. **Visual credibility indicators**: Adding flags to Google search results to indicate whether news articles are false or AI-generated


2. **News classification system**: Rating content on a spectrum from neutral to sensationalist to help users make informed decisions


The groups recognized that implementing such systems would require collaboration with cultural competency experts and appropriate legal frameworks to understand news sources across different contexts; a sketch of what such a flag might look like in code follows below.
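
The group did not define a schema, but as an illustration, a flagged search result could carry its credibility state alongside the usual fields. A minimal sketch with hypothetical field and label names:

```python
# A minimal sketch of the group's proposed credibility flag, with hypothetical
# field and label names (the group did not define a schema).
from dataclasses import dataclass, field
from enum import Enum

class CredibilityFlag(Enum):
    VERIFIED = "verified by fact-checkers"
    AI_GENERATED = "AI-generated content"
    KNOWN_FALSE = "flagged as false by fact-checkers"
    UNREVIEWED = "not yet reviewed"

@dataclass
class NewsResult:
    url: str
    title: str
    flag: CredibilityFlag = CredibilityFlag.UNREVIEWED
    fact_checkers: list[str] = field(default_factory=list)  # sources consulted

result = NewsResult(
    url="https://example.com/story",   # hypothetical article
    title="Example headline",
    flag=CredibilityFlag.AI_GENERATED,
    fact_checkers=["Example Fact-Check Org"],
)
print(f"[{result.flag.value}] {result.title}")  # what the visual cue conveys
```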


### Online Group: Government Process Automation


Hassan Al-Mahmid led the online group in developing a proposal for improving Kuwait’s .kw domain registration process through AI automation. Currently, the process requires manual document verification and takes 48 hours to complete. The proposed solution would use AI image recognition to validate trade licenses and match domain names to business names, potentially reducing processing time to minutes.


The system would also suggest alternative domain names when conflicts arise and could integrate with other government entities to streamline verification processes. Al-Mahmid acknowledged that implementation would require consultation with legal departments regarding confidential data handling and determining acceptable documentation standards.
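
To make the validation step concrete, here is a minimal sketch of the license-to-domain comparison Al-Mahmid describes, assuming the license name has already been extracted by OCR upstream. The names, normalization rules, and similarity threshold are all illustrative assumptions, not CITRA’s actual process:

```python
# A minimal sketch of the validation step Al-Mahmid describes: compare the
# requested .com.kw label against the trade-license name (extracted by OCR
# upstream) and suggest alternatives on conflict. The normalization rules and
# 0.8 similarity threshold are illustrative assumptions.
import difflib
import re

def normalize(name: str) -> str:
    """Lowercase and strip non-alphanumerics so 'Al-Noor Trading Co.' and
    'alnoortrading' compare sensibly."""
    return re.sub(r"[^a-z0-9]", "", name.lower())

def matches_license(requested_label: str, license_name: str,
                    threshold: float = 0.8) -> bool:
    ratio = difflib.SequenceMatcher(
        None, normalize(requested_label), normalize(license_name)
    ).ratio()
    return ratio >= threshold

def suggest_labels(license_name: str) -> list[str]:
    base = normalize(license_name)
    return [f"{base}.com.kw", f"{base[:12]}.com.kw"]

license_name = "Al-Noor Trading Company"   # hypothetical trade-license name
requested = "alnoortrading"                # hypothetical requested label
if matches_license(requested, license_name):
    print("auto-approve registration")
else:
    print("conflict; suggest:", suggest_labels(license_name))
```

Fuzzy matching rather than exact comparison reflects the stated requirement: the domain must match the entity on the trade license, but abbreviations and punctuation differences should trigger suggestions, not outright rejection.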


The project timeline was estimated at six months, though government integration requirements might extend this timeframe.


## Key Themes and Approaches


### User Empowerment Through Transparency


Participants agreed that providing context to users represents an effective approach to information quality, rather than making unilateral content decisions. This philosophy emphasizes user empowerment through transparency, allowing individuals to make informed decisions based on comprehensive information about sources and credibility indicators.


### AI as Enhancement Tool


There was consensus on the role of AI as a tool for verification and enhancement rather than replacement of human judgment. AI was positioned as augmenting human decision-making capabilities rather than supplanting human oversight entirely.


### Multi-Stakeholder Collaboration


All speakers recognized that addressing information quality challenges requires collaboration between the public sector, private sector, academia, and civil society.


## Practical Outcomes


### Concrete Proposals


The session generated three specific product proposals:


1. **News Article Credibility System**: Visual indicators and classification systems for search results to inform users about news article reliability


2. **AI-Powered Domain Registration**: Automated system for validating commercial entity documentation in government processes


3. **Contextual Information Tools**: Systems that provide users with background information to make informed decisions about content credibility


### Commitments


Hassan Al-Mahmid agreed to present Kuwait’s domain registration AI automation project as a detailed case study. Will Carter committed to remaining available throughout IGF week for follow-up questions and discussions.


## Challenges Identified


The discussion highlighted several ongoing challenges:


– **Cultural competency**: Developing information quality systems that work across different political and cultural environments


– **Implementation complexity**: Balancing innovation with regulatory compliance, particularly in government contexts


– **Success measurement**: Establishing metrics for evaluating information quality initiatives


– **Automation oversight**: Determining appropriate balance between automated systems and human oversight


## Conclusion


The workshop successfully demonstrated the complexity of developing trustworthy products and policies while providing participants with practical experience in collaborative problem-solving. The session revealed common ground around user empowerment through transparency, multi-stakeholder collaboration, and AI as a verification enhancement tool.


The three concrete proposals developed during the workshop provide starting points for addressing information quality challenges, while the collaborative approach modeled during the session offers a framework for future multi-stakeholder engagement in digital governance challenges.


Session transcript

Jim Prendergast: Patience, as we kick off the IGF 2025, it’s always a challenge with day zero, 9 a.m., for everybody to find the room, find their way around the venue, get through security, and as you see, get rid of some of the tech gremlins that we have sometimes. My name’s Jim Prendergast. I’m with the Galway Strategy Group. I’m gonna sort of moderate this session for you. Officially, it’s titled How to Develop Trustworthy Products and Policies. But the folks at Google sort of have an internal nickname for it. It’s called Project Manager for a Day. So what we essentially wanna do is give you an overview of what it’s like to be a product manager at Google. How do you balance all the different challenges when it comes to launching a product into the marketplace? All the different factors that these folks have to take into consideration before you actually see a product and some of the different feedback cycles that it goes through and some of the challenges that, frankly, you face on a day-to-day basis. What I’m gonna do is I’m gonna introduce our two speakers. We have one speaker here in person and then one speaker online. And then they’re gonna give a quick overview, some case studies to sort of show you what they deal with on a regular basis. And they’ll discuss some of the different considerations that do go into the product development. And then next what we’ll do is we’re gonna do essentially two breakout groups. One will be the in-person participation, folks here in the room. Will’s gonna work with you through some tabletop exercises for about 20, 25 minutes. And then Nadja’s gonna, fingers crossed, work with the online participants to accomplish the same. From a technical standpoint, I think the easiest way to not hear the people talking to each other online is for all of us to take our headsets off. That seems to be the shortest way to solve that tech issue with the online and the offline participants during remote participation, which of course is an important aspect of the IGF. So let me get going here and do some introductions. First, we have Will Carter. Will’s an AI policy expert with extensive experience shaping government policies and regulations on AI, working with product teams to develop and deploy AI responsibly in real world applications. Currently works on leading AI policy in the knowledge and information team at Google, where he leads engagement on AI policy and regulatory standards with senior policy makers around the world. He’s advised senior leadership and C-suite executives on AI policy strategy and implementation, and developed and implemented AI policies and governance across the company. Prior to joining Google, he was with the Center for Strategic and International Studies, where he focused his research on international technology policy issues, including emerging technologies and artificial intelligence. So if you’ve got a question about AI, this is your guy. Joining us remotely is Nadja Blagojevic. She’s based in London. She is a knowledge and information trust manager at Google with over 15 years of experience in the tech industry. She’s an expert in online safety and digital literacy, and she’s held various leadership positions at Google, including leading work across Europe on family safety and content responsibility. So with that, what I’m gonna do is throw it over to Nadja to kick us off with the case studies to help set the stage for us. Nadja?


Nadja Blagojevic: Great, thanks very much, Jim, and thank you very much, everyone, for being with us here this morning. So without further ado, we will jump right in. I’m very excited to be talking with you all about what product managers do at a company like Google, and as with most jobs, there’s no one right way to do it; if you ask a hundred people, you’ll probably get a hundred different answers, but there are some common elements that we will talk about today. So you can think of a product manager as the person who’s responsible for figuring out at its core what the problem is that needs to be solved. Sometimes it’s very easy to identify what a problem is. For example, once word processors were built, it was fairly obvious that a spell checker would be an improvement. But some things can be less obvious. For example, with Google Street View, when we first launched, it wasn’t clear the degree to which seeing a location before a drive, a trip, or a contemplated move could be valuable. This feature was a less obvious addition to an online map, and it solved a problem that most people didn’t even realize that they had. So the PM focuses on identifying that problem and then building out a vision, a strategy, and a roadmap. The vision should really be informed by the problem that you’re trying to solve. It should be a stable, long-term, high-level overview of what that problem is and really how you’re going to tackle it. The strategy helps you navigate and leverage the technology and the ecosystem factors that will be playing out over the lifetime of your product. Your strategy should be relatively stable, and your roadmap is really thinking about how you sequence what you’re going to do to build your specific feature and move towards your vision. Your roadmap usually changes pretty frequently. In consumer tech, if you build a roadmap and it’s accurate for a year, you’re very lucky. PMs partner really closely to coordinate teams (data, users, sales, marketing) and deliver the right features at all the right times in the product development lifecycle. And we really try to make sure that we are also the ultimate champions of our products, both inside the company and externally. And the goal is really to… to make sure that we’re building something of value so that our broader teams and stakeholders can evangelize what we build as well. As product managers, we work really closely with our colleagues in user experience, which is sometimes abbreviated as UX, to iteratively design and validate what we’re building at progressively higher levels of fidelity. It’s very expensive to change something that’s fully developed, but it’s very inexpensive to put a wireframe or a rough sketch of a product in front of someone that we want to use the product and ask questions like, would you use this? What will you use it for? What doesn’t make sense? What’s missing? It can be really amazing how these small changes in language and wording and also insights can lead to huge impacts in adoption. And lastly, but certainly not least, our engineer counterparts. Engineers build and maintain products. They make them work reliably and quickly for users. And both UX and Eng are included when we do our roadmapping and strategy setting. We build better plans and roadmaps when we have all three functions working together from the get-go to sort of build out that roadmap and set the strategy and vision.
So as Jim mentioned, we’ll go through a couple of quick case studies to give you a sense of how we approach product development, walking through a couple of features that we’ve developed here at Google. So talking now about AI overviews. Not yet. Could I just interrupt real quick? To the guys in the back, can we display the slides in the Zoom and on the screen? Is that possible? There we go. Great, thanks very much. And if we could just advance to the next slide, please. We’ll just go right into our AI overviews case study. Great. So building on our years of innovation and leadership in search, AI overviews are part of Google’s approach to provide helpful responses to queries from people around the world. They use generative AI to provide key information about a topic or a question. And they were really designed to show up on queries where they can add additional benefit beyond what people might already be getting from search, where we have high confidence in the overall quality of the responses. So for example, if you look on the query to the right of the screen, you can see that AI overviews let you ask more complex questions. This query is asking for help on how to stand out on a first time apartment application. And you can see you get a really nuanced answer. You get corroborating links here and additional resources to dive in and learn more. And you get that kind of information and extra help in a very digestible way. You can see here the user experience elements and the design with the bullet points, for example, or the placement of the links in this response. And on the next slide, talking a little bit about that sort of bar of high quality. For AI overviews, we’ve designed it to only show information that’s supported by high quality results from across the web, meaning that generally AI overviews don’t hallucinate in the ways that other LLM experiences might. We think this is important across the board, but especially important for queries that might be particularly sensitive for a given reason. And for these kinds of queries, whether they’re about something maybe health-related or finance-related or seeking certain types of advice, we have an even higher quality bar for showing information from reliable sources. We also have built into the product that for these queries, AI overviews will inform people when it’s important to seek out expert advice or to verify the information that’s being presented. And then finally here, we also have a set of links and a display panel here on the right-hand side with more additional resources for relevant web pages right within the text of the AI overviews. And we’ve seen really positive results showing these links to supporting pages directly within AI overviews is driving higher traffic to publisher sites. And because of AI overviews, we’re seeing that people are asking longer questions, diving more deeply into complex subjects, and uncovering new perspectives, which means more opportunities for people to discover content from publishers, from businesses, and from creators. I’ll hand over now to Will to talk about About This Image.


Will Carter: Thanks, Nadja, and thank you all for coming today. I’m going to talk a little bit about another feature that we launched in 2023 called About This Image. Google Search has built-in tools that really are designed to help users find high-quality information, but also to make sense of the information that they’re interacting with online. And About This Image and SynthID are designed to help users understand the context and the credibility of images they’re interacting with online, including understanding if those images have been generated by Google’s AI tools. So with Google Image Search results, you can click on the three dots above the image, and that will show you the image’s history, which includes other sites that accurately describe the original context and origin of the image. And it allows you to really understand the evidence and perspectives across a variety of sources related to the image. And finally it allows you to see the image’s metadata. So increasingly, publishers, content creators, and others are adding metadata, tags that provide additional information and context about an image that can provide a variety of information including whether or not it’s been generated, enhanced, or manipulated by AI, which is increasingly important to understand as powerful image generation and image alteration engines are widely available. So one of the key ways that we do this is using a tool called SynthID, which is a tool for watermarking and identifying AI generated content. Basically what this does is it embeds a digital watermark directly into the pixels of an image generated by Google’s AI image generation tools. That’s important because even when the image has been altered, for example by cropping it or screenshotting it, or resizing or recoloring or flipping the image, those watermarks can still be detected, making it more robust to adversarial behavior. And all images made with Google’s consumer AI tools are marked with SynthID. And that means that if you encounter an image through Google search, that is generated by a Google AI tool, you will be able to see that in About This Image. So this last GIF here shows how we’ve recently integrated About This Image into one of our other products, Circle to Search. So Circle to Search allows you to select something on the screen and access additional information about it. In this case, you can circle an image and get About This Image information to get context about images that you interact with online, which can be a really powerful way, again, to really understand that context and make sure that the image that you’re interacting with is being used in the way that was intended with appropriate context and accurately. So I’ll pass back to Jim for our activity.


Jim Prendergast: Yeah, sure. So thanks, Will. So, you know, sort of just give you a high level of all the different things that product managers have to consider working with their teams, the privacy rights, some of the metadata you talked about with the image. So what we’re gonna do now, I realize it’s early, hopefully you’ve all had your coffee and are ready to be a little interactive, is we’re gonna break out into two, maybe three breakout groups. I’d figure two in the physical room and one in the online room, just based upon how many folks we have. And what we’re gonna do is we’re gonna ask you to think a little bit for about 15 minutes or so, come up with some ideas. There’ll be some instructions on the next slide that Will’s gonna walk you through. And then what we’ll do is we’ll come back and share some ideas and thoughts for the final 15 minutes or so. So Will, why don’t you show them what they’re working with?


Will Carter: All right. So basically we’re going to have you break out into groups and nominate one PM. That’s going to be the person who’s kind of leading and presenting on behalf of your group. You pick an area of focus and we have a couple of options for you, but you’re welcome to pick something else. if you prefer, but info quality, news and privacy are some of the areas that we are actively working on every day. So the idea is, come up with an idea. Come up with a feature that you think we could add to Google search to address one of these issues. Or make up your own product. Then you’ll pitch your ideas to your VPs, that’s us, and argue for resources based on what you need in order to make this real. What you think the return on investment that you could generate from this product. And that doesn’t necessarily just mean how do you make money from it, but also how do you add value for the user, address a specific problem that our users are encountering in the way that they engage with our products. And don’t forget about the various things that you’re going to need to make this a reality. So that’s that UXR and support that Nadia was talking about earlier. But also, what is your go to market strategy? What are your success metrics? What is a realistic timeline or roadmap? You’ll have about 15 or 20 minutes to do this activity and we’ll be, Nadia and I will be engaging with your groups to help you work through this exercise. So good luck and maybe, what do you think? We can, yep. Okay, maybe we can divide right about here. So in the red, right there. You to this side, everyone else to that side. We can have our two groups in the room.


Jim Prendergast: All right, and Will’s gonna come down and prime the creative engine for everybody. And then Nadia’s got the online folks as well. So we’ll come back in 15 minutes and share experiences. And I know there was a question that we had in the chat room and we’ll answer that when we come back from the breakout as well. Thanks.


Nadja Blagojevic: Great, and so for everyone online, could you please try coming off mute and saying good morning?


Hassan Al-Mahmid: Thank you. Hello and good morning, everyone. Hello. Basically, we are in Norway right now, but we arrived early in the morning, we couldn’t attend the session.


Nadja Blagojevic: Ah, I see.


Hassan Al-Mahmid: And then we attend afternoon sessions in person. We’re from Kuwait, we’re from the Communication and Information Technology Regulatory Authority, etc. My name is Hassan Al-Mahmid, and I’m in charge of the cctlz.kw.


Nadja Blagojevic: Wonderful, it’s wonderful to have you with us. Are others able to come off of mute?


Audience: Hi Nadia, can you hear me? Yes, I can hear you. Hi, this is Nidhi, and I’m joining in from India, so hello. I am an accommodation, and I’m doing my PhD, and somewhere lies between tech and public policy, and various areas of ethics, so I’m very happy to be here. Good to see you.


Nadja Blagojevic: Wonderful, great to see you as well. All right, wonderful, it’s good to know that everyone’s able to come off mute. At this point, I’d like to ask everyone to please unmute yourself, because for the next few minutes we’ll be having a group discussion. Which I will not be leading, that will fall to you all. So as Will and Jim mentioned, for this next session in the breakout, we will be, rather you will be, brainstorming an idea as product managers. And it can be related to Google search, it can be related to another Google product, or just any technology idea that you think solves a problem. Can everyone please come off mute?


Audience: Yeah, please just confirm if you can hear me. Yes. Yeah, I’m Abdar. I’m from India. So I’m working as an internet governance intern at National Internet Exchange of India. So I work somewhere in between tech and policy.


Nadja Blagojevic: Wonderful. Yeah. And I’ll pose the question to the group. When you think about a product that you would like to build or a problem that you would like to solve, what springs to mind? And this is open to the entire group, please.


Hassan Al-Mahmid: Well, I do really have a lot of real case scenarios and like some projects undergoing right now. I can share some information with you and maybe if you guys are interested to help us develop the appropriate policies or get insights from you for the upcoming products in the .kw domain space. If you’re interested, I can pitch the idea for you guys and move with it. Or otherwise, I’m really open to work with the other team, other team members on other ideas. And then it’s all going to benefit us all on the way of how we’re going to think of building the policies and what aspects we need to consider when making strong and cohesive policies.


Nadja Blagojevic: Great. Other thoughts from the group?


Audience: I think if I heard Hassan correctly, that he has an idea and probably would like to share that with us and we can. sort of stitch that together, is that correct?


Hassan Al-Mahmid: Yes, that’s correct. I do have like some ideas from our day job, you know, I can share with you. For example, since we are in charge of the .kw domain space, we are thinking of implementing AI tools to help us make the registration process for domain names in Kuwait, the faster and easy process with the benefit of AI, we can like process the domain request almost immediately without wait for someone to look up the documents and make all the choices. So just I’ll give you a brief of how the domain space works in Kuwait. We do have two zones to register. For example, if you would like to register name.com.kw, since we have the extension .com.kw, it represents a commercial entity in Kuwait. So there are some set of requirements for that entity to register, such as having a valid trade license in Kuwait, they have to have a representative in Kuwait, someone either is going to be a Kuwaiti citizen or someone with a work permit in Kuwait. So these kinds of documentations are being like right now, manually uploaded throughout the portal. And then it has to be checked by a person to validate all the information and making sure that the domain registration request is valid. But we are thinking of implementing right now AI tools and some sort of integration between the government entities. So to make the process seamless, and we can have like the domain up and running. within minutes instead of, for example, 48 hours right now. Great. And when you think about building out this AI tool, what kind of resources do you think you would need to be able to develop it? And this is sort of a question for the group. I can give them a hint, basically. Yeah. The process is gonna be similar somehow, like the client who would like to register a domain name, they will need at the moment to upload their trade license. Okay? Once this is uploaded, we can use an image recognition tool to validate the document and make sure it’s not a fraudulent document. One of the regulations and the policies we have in Kuwait that the domain name is being registered for the commercial entity, it has to be matching the name of the entity in the commercial trade license. So we can, with that image recognition tool or text recognition tool, it can match the requested domain name, for example, with the name of the trade license. And if it finds a conflict, it shouldn’t reject the request, it should like pop up some sort of suggestions for the client to pick names from. That’s one example.


Nadja Blagojevic: And what kinds of sort of internal partnerships, which departments do you think, whether that’s UXR or engineering, would you need to work with legal departments? Who would you need to work with to be able to have the tool be able to do what you’ve just described?


Hassan Al-Mahmid: Well, we enjoy at our department, that’s just a one man show. Basically, we do set the policies. and we do have control of the technical aspects of the whole registration process. But we do seek some help from the legal department, that’s for sure, because we have to set some sort of guideline when uploading these documents, and we need to check with the legal department what kind of documents we should accept and how to handle this information, and sensitive where is it going to be, confidential data, it can be shared, what kind of level of confidentiality with these documents being uploaded, how to be handled and whether we can share them with the third parties or not. Yes, great. I mean data privacy and data security seem like they’d be very essential for the product development process. When you think about timeline, do you have an estimated time frame for how long something like this might take to develop? Usually these sort of tools are, the beauty of it, there are a lot of out-of-shelf solutions ready to be picked up and integrated. So we are expecting around six months to be honest, this is the time frame to have it done, in technical aspects, but since we are working with governmental entities here and maybe we need some governmental integration, you know how the government sometimes the time might extend to more than six months. Six months is more optimistic. I like that very much. We always encourage optimism, even though the entire repetition of government work that takes a lot of time, we always push for more. Efficiency and faster time, even though.


Nadja Blagojevic: All right. I think this is great. I think maybe we have a hand raised.


Audience: Yeah, so I had an opinion on that. Sure, go ahead. Yeah, so basically what Hasan is saying is, what I’m understanding is right. He’s saying there needs to be a capacity building, making the public servants familiar with this and integrating this AI into their framework. Right? Is that right, what I’m understanding?


Hassan Al-Mahmid: Yeah, that’s correct. Yes.


Audience: You’ll have to train the public servants on how to use these tools. Basically, there needs to be a capacity building.


Hassan Al-Mahmid: Yeah, there has to be some sort of training on how to use these tools. Yeah, that’s absolutely correct.


Nadja Blagojevic: Hey, everybody. Does anyone on the call have ideas about what we should ask our vice presidents for in terms of resources to develop this kind of capacity building?


Audience: We should tell them to be patient. I agree with that. The process takes time and you’ll have to be patient. Hasan, if you are looking into some global case studies, then you can look into Argentina. They also have some similar program to this.


Hassan Al-Mahmid: Thank you for the insight. We have a couple of success stories in the region. mostly in the United Arab Emirates. They do have implemented some AI tools and I believe also Qatar also they have that sort of tools. We are in talks with them at the moment to benefit from their experience. Since we are like the GCC countries in the Middle East, the Gulf countries, we almost share the same policies and we have also the same structure for domain names. So it’s much easier to get experience from these countries who are more advanced and they’re being very helpful but definitely we are looking to Argentina and we have also looked into Australia also. They have a really great content for domain names, very beneficial.


Nadja Blagojevic: I think we’ll be rejoining the group in about two minutes and so when we go back into the main group, Hassan, would you like to present as the product manager?


Hassan Al-Mahmid: Yeah, definitely, but I would also would love to. Hassan is our representative.


Nadja Blagojevic: Any final thoughts from anyone else on the call or questions or points that we think should be made as Hassan pitches this idea?


Audience: You should communicate what you’re doing to the public because since it’s a public sector, you’ll have to… communicate with them, even the failures as well. So, you know, to build trust.


Nadja Blagojevic: All right, Akhtar, do you have suggestions of how to do that?


Audience: No, as you’re doing, you can just give out small press briefings and something like that, even on your website.


Hassan Al-Mahmid: Yeah, definitely. We do usually have some press releases and briefs sometimes whenever we enable new features in .kw namespace. For example, last year in September when we released the roadmap for registering second-level domain names, that means your name .kw direct without .com or .org and I think it’s just going to be your name and .kw. We have released the roadmap on how you’re going to register these domain names and what are the places it’s going to be released on. Basically, yeah, we do regular press releases whenever we have new features. And this is one of the best ways to communicate with the public aside from social media.


Audience: Because they’re the ultimate users, so you’ll also need their interaction and their feedback. So if there’s no interaction, we’ll not get proper feedback.


Hassan Al-Mahmid: Yeah, and one thing that came to my mind, we are in the process of releasing a dispute resolution policy for domain names in Kuwait and it’s a national dispute resolution policy. When we released that policy, we seeked public consultation. We have the brief on the website. and we gave participants around 60 days to participate and give their idea on what are the policies and what has to be changed or improved. And we have received really good feedback from the public.


Audience: That’s really nice to hear. And 60 days is a good time frame.


Hassan Al-Mahmid: Yes, and this is the approach we’re doing in Sitra, Kuwait. Sitra, Kuwait basically is the TRA, Regulatory Authority for Information and Communication. So right now, whenever we release a new policy, we push it to the public consultation to get feedback. And then we analyze, get the feedback and improve. And then we release the final version.


Audience: Good to hear.


Nadja Blagojevic: Great, so it sounds like we will be rejoining the main group in just a second. And so Hassan will be our representative presenting the product idea. And we’ll also hear from the other two groups that have been workshopping their product ideas in person at IGF.


Audience: Hassan, make us proud.


Jim Prendergast: I hate to break up the creative process, especially at this hour since it’s going. But we do need to come back because they are going to throw us out at 10 o’clock. I promised all of you.


Audience: We’re only like 10 minutes away from the forum, by the way.


Hassan Al-Mahmid: Well, it’s now raining. And then after this session, yeah, we will join you guys on the floor, inshallah.


Jim Prendergast: Okay. Hello, everybody.


Nadja Blagojevic: Great chance to meet you up in person.


Jim Prendergast: Can you all hear us in the online world?


Audience: I am not from India, so I’m not lucky.


Jim Prendergast: Okay. Nadia, can you hear us from where you are? Yes. Okay, great. Well, I was listening actually to all three groups and I was impressed that the creative juices got flowing at this hour in particular with all the jet lag and everything else. So congratulations to everybody who partook. Will, you want to share some insights before we ask? Actually, let me ask you the question while the other groups get organized and prepared to read out to us. So the question that did come in after, before the break was, how do you scrape high quality content and what are the parameters of what you call high quality? And while Will is answering that, each group spokesperson get ready to give us like a two to three minute readout from your deliberations. Thanks.


Will Carter: I wish there was a simple answer to this question. This is something that we struggle with every day and that remains an area of significant innovation and investment for Google. There are a few approaches that we are taking currently and like I said, they just continue to evolve all the time as we try to figure out how to do this better and better. One way is to work with fact-checking organizations around the world that can validate information for us and do additional research and those partnerships are really key. Another way is to identify news sources that are consistently providing high quality information that are independent and that are generally reliable and validated by fact-checkers. And, but really at the end of the day, I think the most important thing that we do is provide context to our users as much as we can about where the information that they’re interacting with came from. So that’s providing additional links, providing counter arguments, providing access to metadata and additional information because there is no one


Jim Prendergast: first, and then we’ll go to the online group, and then the group to the right. So did you nominate a spokesperson, or? OK, great. There should be a mobile microphone, right? I put it on the table. There you go.


Audience: Can you hear me? OK, great. You said two minutes? OK. So in our group, we discussed a feature that would be added to Google search results that include news articles. And the goal of the feature is to give users information about the validity of the news article, some kind of flag or visual signal to show them if they’re looking at something trustworthy. We specifically talked about identifying news that can be known to be false or known to be generated by AI. And we would, if we are able to determine that, add a flag to show that to users that they are looking at something that is AI-generated. And they can still view it, but it would just be kind of a visual cue. We discussed some of the ways to kind of generate this information using fact-checking organizations that are credible and based on the country or location where they’re. reviewing information. We talked a bit about some of the resources needed to do this. Of course, you need an engineering and UX team, but we also talked about kind of cultural competency and having a group or some type of experts on knowing news sources and what kind of the cultural dialogue is in different contexts and also kind of the legal and legal framework to know that. And yeah, talking about the the ROI of this feature, we talked about why a company like Google should incorporate this feature and the ROI would be increasing trust in the product, giving users insight into the information they’re looking at, which is something they’re seeking and would be a unique value that would bring them to use Google search as opposed to other search engines and generally increasing the trust in the product and making the user more able to rely on the information they’re getting would encourage usage and expected roadmap. We didn’t really get that far, but this is the idea we came up with.


Nadja Blagojevic: Great. No, that’s it. You covered a lot of territory in a short period of time, especially with a cold start, so appreciate that. So I’m not sure who was nominated to represent the online participants, but we will unmute you if you try and talk or, Nadja, do you recall who was your spokesperson? Yes, that would be Hassan. Hassan, are you able to come off mute?


Hassan Al-Mahmid: Hello and good morning, everyone.


Jim Prendergast: Good morning.


Hassan Al-Mahmid: My name is Hassan Al-Mahmoud. I’m from Sitra, Kuwait, which is basically the TRA for the country. I represent the .kw domain space in Kuwait. I’m in charge for the domain name registrations and the policy making. And with my colleagues in the online session, we have discussed a feature that would be added for .kw domain name registrations. For example, the current process right now, we do have two zones. We have a restricted zone for registration and unrestricted zone. What we mean by restricted, the third level domain names such as .com.kw, yourname.com.kw, it represents a commercial entity. So in order to, for example, to register a domain under .com.kw, you will have to fulfill some requirements, which are, you have to be a commercial, an official commercial entity in Kuwait with a valid trade license and has to be registered by someone who’s actually in Kuwait, based in Kuwait, either a Kuwaiti citizen or someone with a work permit. So the process right now is semi-manual, we would say, because whoever need to register a domain name, they need to upload some sort of documents like the trade license, their civil ID, for example. And these are being checked manually by one of the employees of the .kw domain space. And then we can grant that domain name registration. But we are looking into some other solutions that might make the process much faster, much easier. We are thinking of implementing AI tools to do this sort of scrubbing and checks. Because one condition, if you’d like to register a .com.kw domain name, the domain you select, it has to be matching with your trade license. or your trademark license. So, instead of doing that manually checking, we can have some sort of scrubbing that will check the name of the license or the name of the trademark and then it will process the request almost immediately. And in case of, for example, whoever is in jurisdiction.com.kw is selecting a name that doesn’t match the trademark or the license, we can, using the AI tool, would give them suggestions what are the appropriate domain names that can be registered.


Jim Prendergast: Great. Thanks, Hasan. We are short on time. I’m getting the clock ticking down sign from Oliver in the back. So, real briefly to our folks in the room on the right.


Audience: Yeah, I’ll be very brief, seeing as that we’re building on the product that was mentioned earlier, but on a public news classification. So, to what Will was saying about creating an informed audience, we would want to build on the, right now when you go on Google Search, you have three dots when you come up with a news article that provides you some context about the news outlet. This feature isn’t right now in the news aggregator tab when you go to Google News. So, we’d like to build on that to have a classification where based on a little spectrum of either being neutral contact or sensationalist content, we would give users the information that they would need to make an informed decision on what they think is credible and trust. That’s really hard to define internally and externally. Just again, building on the other team, we would work with UX and engineering, but also leveraging subject matter expertise at Google, especially with the Google News Initiative team and also just Google News, to ensure that they’re helping us build a framework that can then be taken to product. And in terms of ROI, well, of course, we want to drive user engagement and by providing additional context and other links that within the Google ecosystem, they’re able to continue staying on the platform. continuing to engage with the content that Google would provide. But also, at the end of the day, it’s about providing more context and building information quality online. Again, subject to their own understanding of what they are being the users, but also what quality looks like in different political contexts. So yeah, I think we’re all interested in news credibility.


Jim Prendergast: Yeah, no, that is definitely a common theme. And this being the beginning of the IGF, I’m sure that’s a theme that will carry on for the next several days. Well, I’m impressed. I mean, some really good ideas, some really good thoughts.


Will Carter: Definitely.


Jim Prendergast: Do you want to react? And maybe between you and Nadia, it closes up in the next 90 seconds or so?


Will Carter: Sure, I’ll keep it brief and then kick it over to Nadia. I think that there’s a reason that these issues are top of mind. These are things that I think we’re all struggling with on a day-to-day basis, whether it’s companies like Google that are trying to solve these problems or users on the web that are trying to understand all this information that’s inundating us every day and how to make sense of it and how to understand what is and isn’t credible. You guys have come up with some really great ideas. And I think this gives you a sense of how, when you think of a problem that you interact with every day, how do you actually start to translate that into a product division, identify your needs, turn it into something that can actually work and solve that problem day-to-day? This is what we do at Google. This is exactly what our workday looks like. So I’m really excited to have you all participate in this process. Nadia?


Nadja Blagojevic: Yes, just fully agree with Will. It is wonderful to be with you and hear everyone’s ideas. And these are all topics that we care very deeply about internally at Google. And we’re very grateful for the opportunity to be here and be in dialogue with you all. , to hear your points of view, to learn from you, and to share what we’re doing, not only in terms of how we think about product development and design, and how we’ve approached some of these issues within our own suite of products, but also, you know, to sort of share and be in exchange when it comes to, you know, our philosophies, and, you know, ultimately these topics will need robust collaboration between public, private sector, academia, civil society. So thank you very much for being with us right from the very beginning of day zero, and very much hope you enjoy the rest of your IGF.


Jim Prendergast: Great. Thanks, Nadia. And speaking of collaboration, I’m getting the hook from Oliver in the back of the room. So thanks, everybody, for participating both online and in person. Joel will be here for the rest of the week. So if you have any questions, track him down. That’s how these IGFs work if you’ve never been. So thanks, everybody, and have a great meeting. Bye-bye.



Nadja Blagojevic

Speech speed

150 words per minute

Speech length

1781 words

Speech time

709 seconds

Product managers identify problems to solve, build vision/strategy/roadmap, and coordinate teams to deliver features

Explanation

Product managers are responsible for figuring out what problems need to be solved, which can range from obvious improvements like spell checkers to less obvious features like Google Street View. They focus on building a stable long-term vision, strategy to navigate technology factors, and roadmaps that sequence feature development.


Evidence

Examples provided include spell checker as an obvious improvement to word processors, and Google Street View as a less obvious feature that solved problems people didn’t realize they had


Major discussion point

Product Management at Google


Topics

Digital business models


Product managers work closely with UX teams to iteratively design and validate products at different fidelity levels

Explanation

Product managers collaborate with user experience teams to design and validate products progressively, starting with wireframes and rough sketches before full development. This approach is cost-effective since it’s expensive to change fully developed products but inexpensive to test early concepts with users.


Evidence

Mentioned that small changes in language, wording, and insights from early testing can lead to huge impacts in adoption


Major discussion point

Product Management at Google


Topics

Digital business models


Agreed with

– Jim Prendergast
– Audience

Agreed on

Product development requires cross-functional collaboration and user-centered design


Product managers collaborate with engineers who build and maintain products, with all three functions working together from the beginning

Explanation

Engineers are responsible for building and maintaining products to work reliably and quickly for users. Both UX and engineering teams are included in roadmapping and strategy setting from the start, as better plans emerge when all three functions collaborate from the beginning.


Major discussion point

Product Management at Google


Topics

Digital business models


AI overviews use generative AI to provide key information and show up on queries where they add benefit beyond regular search results

Explanation

AI overviews are part of Google’s approach to provide helpful responses using generative AI, designed to appear on queries where they can add additional benefit beyond standard search results. They allow users to ask more complex questions and receive nuanced answers with corroborating links.


Evidence

Example provided of a query asking ‘how to stand out on a first time apartment application’ which receives a nuanced answer with bullet points, links, and additional resources


Major discussion point

AI-Powered Search Features and Quality


Topics

Digital business models | Interdisciplinary approaches


AI overviews are designed to only show information supported by high-quality results and don’t hallucinate like other LLM experiences

Explanation

AI overviews have a high quality bar and only display information supported by high-quality web results, which prevents hallucination issues common in other large language model experiences. For sensitive queries about health, finance, or advice, there’s an even higher quality standard and the system informs users when expert advice should be sought.


Evidence

Mentioned that AI overviews inform people when it’s important to seek expert advice or verify information, and show links to supporting pages that drive higher traffic to publisher sites


Major discussion point

AI-Powered Search Features and Quality


Topics

Content policy | Consumer protection


Building information quality requires robust collaboration between public sector, private sector, academia, and civil society

Explanation

Addressing information quality challenges cannot be solved by any single entity alone but requires collaborative efforts across different sectors. This multi-stakeholder approach is essential for developing effective solutions to information credibility issues.


Major discussion point

News Credibility and Information Quality Solutions


Topics

Content policy | Interdisciplinary approaches


Agreed with

– Will Carter
– Audience

Agreed on

Information quality requires collaborative approaches and providing context to users


J

Jim Prendergast

Speech speed

181 words per minute

Speech length

1209 words

Speech time

399 seconds

Product development involves balancing multiple challenges and considerations before launching products into the marketplace

Explanation

Product managers at Google must balance numerous different challenges and factors when launching products, including privacy rights, metadata considerations, and various feedback cycles. The session aims to show participants what it’s like to be a product manager dealing with these day-to-day challenges.


Evidence

Mentioned privacy rights and metadata considerations as examples of factors that must be balanced


Major discussion point

Product Management at Google


Topics

Digital business models | Privacy and data protection


Agreed with

– Nadja Blagojevic
– Audience

Agreed on

Product development requires cross-functional collaboration and user-centered design


W

Will Carter

Speech speed

167 words per minute

Speech length

1148 words

Speech time

412 seconds

There is no simple answer to identifying high-quality content – it requires partnerships with fact-checking organizations and identifying reliable news sources

Explanation

Identifying high-quality content is a complex challenge that Google struggles with daily and continues to invest in solving. The approach involves working with fact-checking organizations worldwide for validation and identifying news sources that consistently provide reliable, independent information.


Evidence

Mentioned partnerships with fact-checking organizations and identifying consistently reliable and independent news sources validated by fact-checkers


Major discussion point

AI-Powered Search Features and Quality


Topics

Content policy | Freedom of the press


Disagreed with

– Audience

Disagreed on

Approach to defining and identifying high-quality content


The most important approach is providing context to users about where information came from through additional links and metadata

Explanation

Rather than trying to be the sole arbiter of information quality, Google focuses on giving users as much context as possible about information sources. This includes providing additional links, counter arguments, and access to metadata so users can make informed decisions.


Evidence

Mentioned providing additional links, counter arguments, and access to metadata as ways to give users context


Major discussion point

AI-Powered Search Features and Quality


Topics

Content policy | Freedom of expression


Agreed with

– Nadja Blagojevic
– Audience

Agreed on

Information quality requires collaborative approaches and providing context to users


Disagreed with

– Audience

Disagreed on

Approach to defining and identifying high-quality content


About This Image helps users understand context and credibility of images online, including if they were generated by AI tools

Explanation

About This Image is a feature launched in 2023 that helps users understand the context and credibility of images they encounter online. Users can click on three dots above an image to see its history, other sites that describe its original context, and metadata that may indicate if it was AI-generated.


Evidence

Feature shows image history, sites describing original context and origin, and metadata tags that can indicate if images were generated, enhanced, or manipulated by AI


Major discussion point

Image Verification and AI-Generated Content Detection


Topics

Content policy | Digital identities


SynthID embeds digital watermarks in AI-generated images that remain detectable even after alterations like cropping or resizing

Explanation

SynthID is a watermarking tool that embeds digital watermarks directly into the pixels of images generated by Google’s AI tools. These watermarks are robust and can still be detected even when images are altered through cropping, screenshotting, resizing, recoloring, or flipping. (An illustrative code sketch of this robustness check follows this entry.)


Evidence

Watermarks remain detectable after cropping, screenshotting, resizing, recoloring, or flipping, making them robust against adversarial behavior


Major discussion point

Image Verification and AI-Generated Content Detection


Topics

Content policy | Digital identities | Intellectual property rights


Agreed with

– Hassan Al-Mahmid

Agreed on

AI tools can significantly improve efficiency in content verification and processing
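
The robustness property described in the SynthID entry above can be made concrete with a small harness that re-runs detection after each alteration mentioned in the session. In the sketch below, `detect_watermark` is a hypothetical stand-in (SynthID’s actual detector and API are not public); only the verification loop is the point.

```python
# Minimal sketch of a robustness check for an image watermark detector.
# detect_watermark() is a HYPOTHETICAL stand-in for a real detector such
# as SynthID, whose actual API and internals are not public.

from PIL import Image


def detect_watermark(image: Image.Image) -> bool:
    """Hypothetical detector: returns True if a watermark is found.

    A real detector would run a learned model over the pixel data; here
    we simply pretend every image is watermarked so the harness runs.
    """
    return True


def robustness_check(original: Image.Image) -> dict[str, bool]:
    """Re-run detection after the alterations mentioned in the session:
    cropping, resizing, recoloring, and flipping."""
    w, h = original.size
    variants = {
        "untouched": original,
        "cropped": original.crop((w // 4, h // 4, 3 * w // 4, 3 * h // 4)),
        "resized": original.resize((w // 2, h // 2)),
        "recolored": original.convert("L"),  # grayscale as a stand-in
        "flipped": original.transpose(Image.Transpose.FLIP_LEFT_RIGHT),
    }
    return {name: detect_watermark(img) for name, img in variants.items()}


if __name__ == "__main__":
    img = Image.new("RGB", (256, 256), color=(120, 80, 200))
    for name, detected in robustness_check(img).items():
        print(f"{name:10s} -> watermark detected: {detected}")
```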


All images made with Google’s consumer AI tools are marked with SynthID for identification in search results

Explanation

Google has implemented a comprehensive approach where every image generated by their consumer AI tools receives a SynthID watermark. This means users can identify AI-generated images from Google tools when they encounter them through Google search using the About This Image feature.


Evidence

Integration with Circle to Search feature allows users to circle an image and get About This Image information for context


Major discussion point

Image Verification and AI-Generated Content Detection


Topics

Content policy | Digital identities | Consumer protection


H

Hassan Al-Mahmid

Speech speed

136 words per minute

Speech length

1716 words

Speech time

753 seconds

Current .kw domain registration requires manual document verification which takes 48 hours, but AI tools could process requests immediately

Explanation

The current domain registration process in Kuwait requires manual verification of documents like trade licenses and civil IDs, taking up to 48 hours for approval. By implementing AI tools and integrating with government entities, the process could be completed within minutes instead of the current lengthy timeframe.


Evidence

Current process requires manual checking of uploaded documents by employees, while proposed AI integration could make domains ‘up and running within minutes instead of 48 hours’


Major discussion point

Domain Registration Process Improvement


Topics

Capacity development | Digital access | Alternative dispute resolution


AI image recognition could validate trade licenses and match domain names to business names, suggesting alternatives when conflicts arise

Explanation

The proposed AI system would use image and text recognition to validate uploaded trade licenses and ensure domain names match the business names on official documents. When conflicts are found, instead of rejecting requests, the system would provide suggested alternative domain names that comply with regulations. (An illustrative code sketch of this matching step follows this entry.)


Evidence

Example given of validating that requested domain name matches the name on trade license, and providing suggestions when conflicts are found rather than outright rejection


Major discussion point

Domain Registration Process Improvement


Topics

Digital business models | Alternative dispute resolution | Intellectual property rights


Agreed with

– Will Carter

Agreed on

AI tools can significantly improve efficiency in content verification and processing
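
The license-matching step described above can be pictured as OCR followed by a fuzzy name comparison. The sketch below is a minimal illustration using only the Python standard library; `extract_business_name`, the 0.8 similarity threshold, and the suggestion rules are invented assumptions, not the actual .kw registry logic.

```python
# Illustrative sketch of the domain-vs-trade-license check described above.
# The OCR step and the 0.8 similarity threshold are assumptions for
# illustration; they do not reflect the actual .kw registry implementation.

import difflib
import re


def extract_business_name(license_document: bytes) -> str:
    """Hypothetical stand-in for the AI/OCR step that reads the business
    name off an uploaded trade license image."""
    return "Gulf Coffee Trading Company"


def normalize(name: str) -> str:
    """Lowercase and strip everything except letters and digits."""
    return re.sub(r"[^a-z0-9]", "", name.lower())


def validate_request(requested_domain: str, license_document: bytes) -> dict:
    business = extract_business_name(license_document)
    label = requested_domain.split(".")[0]  # e.g. "gulfcoffee" from "gulfcoffee.com.kw"
    score = difflib.SequenceMatcher(
        None, normalize(label), normalize(business)
    ).ratio()
    if score >= 0.8:
        return {"approved": True, "suggestions": []}
    # Instead of outright rejection, suggest compliant alternatives.
    words = [w.lower() for w in re.findall(r"[A-Za-z]+", business)]
    suggestions = [
        "".join(words) + ".com.kw",
        "-".join(words[:2]) + ".com.kw",
    ]
    return {"approved": False, "suggestions": suggestions}


if __name__ == "__main__":
    print(validate_request("gulfcoffeetrading.com.kw", b"<license image>"))
    print(validate_request("bestpizza.com.kw", b"<license image>"))
```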


Implementation would require legal department consultation for handling confidential data and determining acceptable documents

Explanation

The AI tool implementation requires collaboration with legal departments to establish guidelines for document handling, determine acceptable document types, and address data privacy concerns. Legal consultation is essential for determining confidentiality levels and whether documents can be shared with third parties.


Evidence

Need to check with legal department about what documents to accept, how to handle sensitive/confidential data, and whether information can be shared with third parties


Major discussion point

Domain Registration Process Improvement


Topics

Privacy and data protection | Data governance | Legal and regulatory


The project timeline is optimistically six months but may extend longer due to government integration requirements

Explanation

While the technical implementation using off-the-shelf AI solutions could be completed in six months, the involvement of governmental entities and required integrations may extend the timeline significantly. The six-month estimate represents an optimistic scenario for the technical aspects alone.


Evidence

Mentioned that ‘there are a lot of out-of-shelf solutions ready to be picked up and integrated’ but ‘since we are working with governmental entities… the time might extend to more than six months’


Major discussion point

Domain Registration Process Improvement


Topics

Capacity development | Digital business models


A

Audience

Speech speed

155 words per minute

Speech length

984 words

Speech time

379 seconds

Proposed feature would add visual flags to Google search results to indicate if news articles are false or AI-generated

Explanation

The proposed feature would provide users with visual signals or flags in Google search results to indicate the validity of news articles, specifically identifying content known to be false or generated by AI. Users could still view the content but would receive visual cues about its nature and credibility.


Evidence

Feature would use fact-checking organizations that are credible and based on country/location for validation


Major discussion point

News Credibility and Information Quality Solutions


Topics

Content policy | Freedom of the press | Consumer protection


Agreed with

– Nadja Blagojevic
– Will Carter

Agreed on

Information quality requires collaborative approaches and providing context to users


Disagreed with

– Will Carter

Disagreed on

Approach to defining and identifying high-quality content


Solution would require cultural competency experts and legal frameworks to understand news sources in different contexts

Explanation

Implementing news credibility features requires more than just technical resources – it needs cultural competency experts who understand news sources and cultural dialogue in different contexts, as well as appropriate legal frameworks. This recognizes that news credibility varies across different cultural and legal environments.


Evidence

Mentioned need for ‘cultural competency and having a group or some type of experts on knowing news sources and what kind of the cultural dialogue is in different contexts’


Major discussion point

News Credibility and Information Quality Solutions


Topics

Content policy | Cultural diversity | Legal and regulatory


Agreed with

– Nadja Blagojevic
– Jim Prendergast

Agreed on

Product development requires cross-functional collaboration and user-centered design


Proposed news classification system would rate content on a spectrum from neutral to sensationalist to help users make informed decisions

Explanation

The proposed system would classify news content on a spectrum ranging from neutral to sensationalist, building on existing Google features that provide context about news outlets. This classification would help users make informed decisions about content credibility while acknowledging that trust is difficult to define both internally and externally. (A toy code sketch of such a spectrum follows this entry.)


Evidence

Would build on existing three-dot feature in Google Search that provides context about news outlets, extending it to Google News aggregator tab


Major discussion point

News Credibility and Information Quality Solutions


Topics

Content policy | Freedom of the press | Consumer protection


Disagreed with

– Will Carter

Disagreed on

Approach to defining and identifying high-quality content
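
As a toy illustration of the proposed neutral-to-sensationalist spectrum, the sketch below scores a headline from a few lexical cues (exclamation marks, all-caps words, a small sensational word list). The cues and weights are invented for illustration; a real classifier would need trained models plus the cultural and political context the group flagged as hard to define.

```python
# Toy neutral-to-sensationalist scorer, illustrating the proposed spectrum.
# The cue list and weights are invented for illustration only; they are
# not a credible measure of journalistic quality.

import re

SENSATIONAL_CUES = {
    "shocking", "unbelievable", "destroyed", "slams", "you won't believe",
    "miracle", "outrage", "exposed",
}


def sensationalism_score(headline: str) -> float:
    """Return a score in [0, 1]: 0 reads neutral, 1 reads sensationalist."""
    text = headline.lower()
    score = 0.0
    score += 0.3 * min(headline.count("!"), 2)           # exclamation marks
    caps = [w for w in re.findall(r"[A-Za-z]{3,}", headline) if w.isupper()]
    score += 0.2 * min(len(caps), 2)                     # ALL-CAPS words
    score += 0.3 * sum(cue in text for cue in SENSATIONAL_CUES)
    return min(score, 1.0)


if __name__ == "__main__":
    for h in [
        "Central bank holds interest rates steady",
        "SHOCKING! You won't believe what the bank just did!",
    ]:
        print(f"{sensationalism_score(h):.2f}  {h}")
```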


Agreements

Agreement points

Information quality requires collaborative approaches and providing context to users

Speakers

– Nadja Blagojevic
– Will Carter
– Audience

Arguments

Building information quality requires robust collaboration between public sector, private sector, academia, and civil society


The most important approach is providing context to users about where information came from through additional links and metadata


Proposed feature would add visual flags to Google search results to indicate if news articles are false or AI-generated


Summary

All speakers agreed that addressing information quality challenges requires multi-stakeholder collaboration and providing users with contextual information rather than making unilateral content decisions. This includes partnerships with fact-checking organizations and giving users tools to make informed decisions.


Topics

Content policy | Interdisciplinary approaches | Freedom of expression


AI tools can significantly improve efficiency in content verification and processing

Speakers

– Will Carter
– Hassan Al-Mahmid

Arguments

SynthID embeds digital watermarks in AI-generated images that remain detectable even after alterations like cropping or resizing


AI image recognition could validate trade licenses and match domain names to business names, suggesting alternatives when conflicts arise


Summary

Both speakers demonstrated how AI tools can automate and improve verification processes – Carter with image authenticity verification through SynthID, and Al-Mahmid with document verification for domain registration. Both emphasized AI’s ability to process and validate content more efficiently than manual methods.


Topics

Digital business models | Content policy | Digital identities


Product development requires cross-functional collaboration and user-centered design

Speakers

– Nadja Blagojevic
– Jim Prendergast
– Audience

Arguments

Product managers work closely with UX teams to iteratively design and validate products at different fidelity levels


Product development involves balancing multiple challenges and considerations before launching products into the marketplace


Solution would require cultural competency experts and legal frameworks to understand news sources in different contexts


Summary

All speakers recognized that successful product development requires collaboration across multiple disciplines including UX, engineering, legal, and cultural expertise. They emphasized the importance of iterative design, user validation, and considering diverse stakeholder needs.


Topics

Digital business models | Cultural diversity | Legal and regulatory


Similar viewpoints

Both emphasized the importance of providing users with contextual information and classification systems to help them evaluate content credibility, whether for images or news articles. They shared the philosophy of empowering users with information rather than making decisions for them.

Speakers

– Will Carter
– Audience

Arguments

About This Image helps users understand context and credibility of images online, including if they were generated by AI tools


Proposed news classification system would rate content on a spectrum from neutral to sensationalist to help users make informed decisions


Topics

Content policy | Consumer protection | Freedom of expression


Both recognized that technical solutions must be accompanied by appropriate legal frameworks and expertise. They understood that implementing AI-powered systems requires careful consideration of legal, cultural, and regulatory contexts.

Speakers

– Hassan Al-Mahmid
– Audience

Arguments

Implementation would require legal department consultation for handling confidential data and determining acceptable documents


Solution would require cultural competency experts and legal frameworks to understand news sources in different contexts


Topics

Legal and regulatory | Privacy and data protection | Cultural diversity


Unexpected consensus

Transparency and user empowerment over content control

Speakers

– Will Carter
– Audience
– Nadja Blagojevic

Arguments

The most important approach is providing context to users about where information came from through additional links and metadata


Proposed feature would add visual flags to Google search results to indicate if news articles are false or AI-generated


Building information quality requires robust collaboration between public sector, private sector, academia, and civil society


Explanation

It was unexpected that both Google representatives and audience members converged on the philosophy of transparency and user empowerment rather than platform-controlled content moderation. Instead of advocating for removing or blocking questionable content, all parties favored providing users with tools and context to make their own informed decisions.


Topics

Content policy | Freedom of expression | Consumer protection


AI as a tool for verification rather than replacement of human judgment

Speakers

– Will Carter
– Hassan Al-Mahmid
– Audience

Arguments

All images made with Google’s consumer AI tools are marked with SynthID for identification in search results


AI image recognition could validate trade licenses and match domain names to business names, suggesting alternatives when conflicts arise


Proposed news classification system would rate content on a spectrum from neutral to sensationalist to help users make informed decisions


Explanation

There was unexpected consensus that AI should augment rather than replace human decision-making. All speakers viewed AI as a tool for providing information and suggestions rather than making final determinations about content validity or user choices.


Topics

Digital business models | Content policy | Consumer protection


Overall assessment

Summary

The discussion revealed strong consensus around user empowerment through transparency, multi-stakeholder collaboration for information quality, and AI as a verification tool rather than decision-maker. Speakers agreed on the importance of cross-functional product development and providing contextual information to users.


Consensus level

High level of consensus with significant implications for content policy and platform governance. The agreement suggests a shift toward transparency-based approaches rather than top-down content control, emphasizing user agency and collaborative solutions to information quality challenges.


Differences

Different viewpoints

Approach to defining and identifying high-quality content

Speakers

– Will Carter
– Audience

Arguments

There is no simple answer to identifying high-quality content – it requires partnerships with fact-checking organizations and identifying reliable news sources


The most important approach is providing context to users about where information came from through additional links and metadata


Proposed feature would add visual flags to Google search results to indicate if news articles are false or AI-generated


Proposed news classification system would rate content on a spectrum from neutral to sensationalist to help users make informed decisions


Summary

Will Carter emphasizes providing context and partnerships with fact-checkers rather than making definitive quality judgments, while audience members propose more direct classification systems with visual flags and spectrum-based ratings to guide users.


Topics

Content policy | Freedom of the press | Consumer protection


Unexpected differences

Overall assessment

Summary

The main area of disagreement centers on content quality assessment approaches – whether to provide context for user decision-making versus implementing direct classification systems.


Disagreement level

Low to moderate disagreement with significant implications for content policy approaches. The disagreement reflects fundamental tensions between platform neutrality and active content curation, which has broader implications for how information quality challenges should be addressed in search and news platforms.


Partial agreements



Takeaways

Key takeaways

Product management at Google involves identifying problems, building vision/strategy/roadmap, and coordinating cross-functional teams including UX and engineering from the beginning


High-quality content identification has no simple solution and requires partnerships with fact-checking organizations, identifying reliable sources, and most importantly providing context to users through metadata and additional links


AI-powered features like AI overviews and About This Image are designed to help users understand information credibility and context, with built-in safeguards against hallucination


SynthID watermarking technology allows detection of AI-generated images even after alterations, with all Google AI-generated images being marked


Government domain registration processes can be significantly improved through AI automation, reducing processing time from 48 hours to minutes


News credibility solutions require cultural competency, legal frameworks, and classification systems to help users make informed decisions about information quality


Building trustworthy information systems requires robust collaboration between public sector, private sector, academia, and civil society


Resolutions and action items

Hassan Al-Mahmid will present Kuwait’s .kw domain registration AI automation project as a case study, with an optimistic six-month timeline for implementation


Participants developed three concrete product proposals: news article credibility flags, AI-powered domain registration automation, and news classification spectrum system


Will Carter committed to being available throughout the IGF week for follow-up questions and discussions


Unresolved issues

No definitive solution provided for identifying high-quality content – remains an ongoing challenge requiring continuous innovation


Cultural competency and legal framework requirements for news credibility systems were identified but not fully addressed


Timeline uncertainties for government integration projects due to bureaucratic processes


How to balance automated AI decision-making with human oversight in sensitive areas like domain registration and news credibility


Specific metrics for measuring success of information quality initiatives were not established


Suggested compromises

Providing context and metadata to users rather than making definitive quality judgments about information


Using visual flags and classification systems that inform users rather than censoring content


Implementing AI automation while maintaining human oversight for sensitive decisions


Seeking public consultation periods (like Kuwait’s 60-day feedback process) when implementing new policies


Leveraging existing partnerships with fact-checking organizations rather than building internal validation systems from scratch


Thought provoking comments

There is no one right way to do it, if you ask a hundred people, you’ll probably get a hundred different answers, but there are some common elements… Sometimes it’s very easy to identify what a problem is. For example, once word processors were built, it was fairly obvious that a spell checker would be an improvement. But some things can be less obvious. For example, with Google Street View, when we first launched, it wasn’t clear to the degree to which seeing a location before a drive or a trip or contemplating a move could be… This feature was a less obvious addition to an online map, and it solved a problem that most people didn’t even realize that they had.

Speaker

Nadja Blagojevic


Reason

This comment is insightful because it introduces the fundamental challenge of product management – identifying problems that users don’t even know they have. It demonstrates the difference between obvious improvements and innovative solutions that create new value propositions.


Impact

This comment set the conceptual foundation for the entire discussion by establishing that product management involves both solving known problems and discovering latent needs. It primed participants to think beyond obvious solutions in their breakout exercises.


I wish there was a simple answer to this question. This is something that we struggle with every day and that remains an area of significant innovation and investment for Google… but really at the end of the day, I think the most important thing that we do is provide context to our users as much as we can about where the information that they’re interacting with came from.

Speaker

Will Carter


Reason

This comment is thought-provoking because it acknowledges the complexity and ongoing challenges in content quality assessment, while pivoting to transparency as a practical solution. It shows intellectual honesty about limitations while offering a constructive approach.


Impact

This response validated the difficulty of the problem participants were grappling with and shifted the focus from perfect solutions to transparency-based approaches. It influenced all three breakout groups to incorporate context and transparency elements in their proposed solutions.


You should communicate what you’re doing to the public because since it’s a public sector, you’ll have to communicate with them, even the failures as well. So, you know, to build trust… Because they’re the ultimate users, so you’ll also need their interaction and their feedback. So if there’s no interaction, we’ll not get proper feedback.

Speaker

Audience member (Akhtar)


Reason

This comment is insightful because it introduces the critical dimension of public accountability and transparency in government technology projects. It emphasizes that trust-building requires communicating both successes and failures, which is often overlooked in product development discussions.


Impact

This comment elevated the discussion from technical implementation to governance and public trust considerations. It prompted Hassan to elaborate on Kuwait’s public consultation processes and demonstrated how different sectors (public vs. private) have different stakeholder accountability requirements.


We are thinking of implementing AI tools to help us make the registration process for domain names in Kuwait, the faster and easy process… So these kinds of documentations are being like right now, manually uploaded throughout the portal. And then it has to be checked by a person to validate all the information… But we are thinking of implementing right now AI tools and some sort of integration between the government entities.

Speaker

Hassan Al-Mahmid


Reason

This comment is thought-provoking because it presents a real-world case study of AI implementation in government services, highlighting the practical challenges of balancing automation with regulatory compliance and fraud prevention.


Impact

This concrete example grounded the theoretical discussion in practical reality and shifted the online breakout group’s focus to a specific, implementable solution. It demonstrated how product management principles apply across different sectors and regulatory environments.


There’s a reason that these issues are top of mind. These are things that I think we’re all struggling with on a day-to-day basis, whether it’s companies like Google that are trying to solve these problems or users on the web that are trying to understand all this information that’s inundating us every day and how to make sense of it.

Speaker

Will Carter


Reason

This comment is insightful because it acknowledges the universal nature of information quality challenges, creating common ground between tech companies and users. It validates that these aren’t just corporate problems but societal challenges affecting everyone.


Impact

This comment provided validation for the participants’ concerns and created a sense of shared purpose. It reinforced that the breakout exercise wasn’t just theoretical but addressed real problems that affect all stakeholders in the information ecosystem.


Overall assessment

These key comments shaped the discussion by establishing a framework that moved from theoretical product management concepts to practical, real-world applications with societal implications. Nadja’s opening comment about solving unknown problems set an innovative mindset, while Will’s honest acknowledgment of ongoing challenges with content quality created space for nuanced solutions rather than perfect answers. The audience contributions, particularly around public accountability and the Kuwait domain registration case study, grounded the discussion in practical governance considerations and demonstrated how product management principles apply across sectors. The convergence on information credibility and transparency across all breakout groups shows how these foundational comments successfully oriented participants toward addressing fundamental trust and quality challenges in digital products. The discussion evolved from a product management tutorial into a collaborative exploration of how technology can serve public trust and information integrity.


Follow-up questions

How do you scrape high quality content and what are the parameters of what you call high quality?

Speaker

Audience member (via chat)


Explanation

This is a fundamental question about Google’s content quality assessment methods that was asked but only partially answered, indicating need for more detailed exploration of quality parameters and scraping methodologies


What kind of resources would be needed to develop AI tools for document validation and domain registration processes?

Speaker

Nadja Blagojevic


Explanation

This question was posed to help Hassan think through the practical requirements for implementing AI in government processes, but requires further detailed analysis of technical, legal, and human resources


What kinds of internal partnerships and departments would be needed for AI tool development in government settings?

Speaker

Nadja Blagojevic


Explanation

This explores the organizational structure and collaboration requirements for implementing AI in public sector, which needs more comprehensive mapping of stakeholder involvement


How to effectively communicate AI implementation progress and failures to the public in government projects?

Speaker

Audience member (Akhtar)


Explanation

This addresses the critical need for transparency and trust-building in public sector AI implementations, requiring development of communication strategies and frameworks


What are effective methods for public consultation on new technology policies?

Speaker

Hassan Al-Mahmid (implicitly through discussion of 60-day consultation periods)


Explanation

While Hassan shared their approach, this raises broader questions about best practices for engaging public input on technology policy development across different contexts


How to define and implement cultural competency in news credibility assessment across different contexts?

Speaker

First breakout group


Explanation

The group identified the need for cultural expertise in determining news credibility, but this requires deeper research into how cultural context affects information assessment


How to create effective classification systems for news content (neutral vs sensationalist) across different political contexts?

Speaker

Third breakout group


Explanation

This group proposed a news classification system but acknowledged the challenge of defining quality across different political contexts, requiring further research into objective classification methodologies


What are the best practices for capacity building and training public servants on AI tools?

Speaker

Audience member discussing Hassan’s project


Explanation

This was identified as a critical need for Hassan’s project but requires systematic research into effective training methodologies for government AI adoption


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #30 High Level Review of AI Governance Including the Discussion

Open Forum #30 High Level Review of AI Governance Including the Discussion

Session at a glance

Summary

This discussion focused on the current state and future directions of global AI governance, featuring perspectives from government officials, international organizations, and private sector representatives. The panel was moderated by Yoichi Iida, former Assistant Vice-Minister of Japan’s Ministry of Internal Affairs and Communications, who outlined the evolution of AI governance from early initiatives in 2016 through recent developments including the OECD AI principles, the Hiroshima AI process, and the UN Global Digital Compact.


Lucia Russo from the OECD emphasized three strategic pillars: moving from principles to practice, providing evidence-based policy guidance, and promoting inclusive international cooperation. She highlighted the merger of the Global Partnership on AI with the OECD, expanding membership to 44 countries including six non-OECD members. Abhishek Singh from India’s Ministry of Electronics stressed the importance of democratizing AI access, particularly for the Global South, advocating for equitable access to compute resources, inclusive datasets, and capacity building initiatives.


Juha Heikkila from the European Commission clarified that the EU AI Act regulates specific uses of AI rather than the technology itself, using a risk-based approach that affects only 15-20% of AI systems while maintaining innovation-friendly policies. Melinda Claybaugh from Meta emphasized the need to connect existing frameworks to avoid fragmentation and duplication, calling for a shift from principle development to practical implementation.


Ansgar Koene from EY highlighted the growing need for robust governance frameworks as organizations move AI from experimental to mission-critical applications. All participants agreed on the importance of moving from principles to practice, building capacity globally, and ensuring inclusive participation in AI governance discussions. The conversation concluded with recognition that while AI and internet governance share some similarities, AI governance faces unique challenges requiring specialized approaches tailored to diverse use cases and risk profiles.


Keypoints

## Major Discussion Points:


– **Evolution and Current State of Global AI Governance**: The discussion traced the development of international AI governance from early initiatives in 2016 through major frameworks like OECD AI Principles (2019), the EU AI Act (2023), and the Hiroshima AI Process, highlighting how governance has evolved to address new challenges posed by generative AI technologies.


– **Moving from Principles to Practice**: A central theme emphasized by multiple speakers was the critical need to translate established AI governance principles into concrete, actionable policies and implementation frameworks, including developing toolkits, assessment mechanisms, and practical guidance for organizations and governments.


– **Inclusivity and Global South Participation**: Significant focus on ensuring equitable access to AI technologies, compute resources, and decision-making processes for developing countries and the Global South, with emphasis on capacity building, democratizing AI access, and preventing concentration of AI power in a few companies and countries.


– **Interoperability and Avoiding Fragmentation**: Discussion of the challenge of coordinating multiple international AI governance frameworks while avoiding regulatory fragmentation, with emphasis on finding common ground, connecting existing initiatives, and streamlining efforts to prevent duplication.


– **Multi-stakeholder Collaboration and Implementation**: Examination of roles and responsibilities of different stakeholders (governments, international organizations, private companies, civil society) in implementing AI governance, with focus on transparency, accountability, and collaborative approaches to address global AI challenges.


## Overall Purpose:


The discussion aimed to assess the current landscape of global AI governance and chart a path forward for international cooperation. The panel sought to evaluate existing frameworks, identify priorities for different stakeholders, and explore how to effectively implement AI governance principles while ensuring inclusivity and avoiding regulatory fragmentation.


## Overall Tone:


The discussion maintained a collaborative and constructive tone throughout, characterized by mutual respect and shared commitment to responsible AI development. Speakers demonstrated alignment on core principles while acknowledging different approaches and challenges. The tone was professional and forward-looking, with participants building on each other’s points rather than expressing disagreement. There was a sense of urgency about moving from theoretical frameworks to practical implementation, but this was expressed through cooperative problem-solving rather than criticism of current efforts.


Speakers

**Speakers from the provided list:**


– **Yoichi Iida** – Former Assistant Vice-Minister of the Japanese Ministry of Internal Affairs and Communications, Chair of the OECD Digital Policy Committee


– **Abhishek Singh** – Under-Secretary from the Indian Ministry of Electronics and Information Technology


– **Lucia Russo** – OECD Economist at AI and Digital Emerging Technologies Division


– **Ansgar Koene** – Global AI Ethics and Regulatory Leader from EY Global Public Policy


– **Melinda Claybaugh** – Director of Privacy and AI Policy from Meta


– **Juha Heikkila** – Advisor for International Aspects of Artificial Intelligence from European Commission


– **Audience** – Unidentified audience member who asked a question


**Additional speakers:**


– **Shinichiro Terada** – From the University of Takyushu, Japan (audience member who asked a question about AI governance compared to Internet governance)


Full session report

# Global AI Governance Discussion: From Principles to Practice


## Introduction and Context


This discussion examined the current state and future directions of global artificial intelligence governance, bringing together perspectives from government officials, international organisations, and private sector representatives. The panel was moderated by Yoichi Iida, former Assistant Vice-Minister of Japan’s Ministry of Internal Affairs and Communications and current Chair of the OECD Digital Policy Committee.


The conversation focused on assessing existing international cooperation mechanisms, identifying priorities for different stakeholders, and exploring pathways for translating established principles into practical implementation while ensuring global inclusivity.


## Current State of AI Governance Frameworks


### OECD’s Evolution and Approach


Lucia Russo from the OECD outlined the organisation’s strategic evolution from establishing foundational principles in 2019 to providing comprehensive policy guidance. She emphasised three strategic pillars: moving from principles to practice, providing evidence-based policy guidance through initiatives such as the AI Policy Observatory, and promoting inclusive international cooperation.


A significant development has been the merger of the Global Partnership on AI with the OECD, expanding membership to 44 countries, including six non-OECD members (India, Serbia, Senegal, Brazil, Singapore, and one other). The OECD is developing a toolkit to help countries implement AI principles, though specific details about its format and functionality were not elaborated.


### EU AI Act and Regional Implementation


Juha Heikkila from the European Commission clarified that the EU AI Act regulates specific uses of AI rather than the technology itself, employing a risk-based approach. He explained that “about 80% according to our estimate, maybe even 85% of AI systems…would be unaffected” by the legislation, addressing misconceptions about its scope.


The EU’s engagement extends beyond its own regulatory framework to include participation in G7, G20, Global Partnership on AI, and various international summits, aiming to support global coordination while maintaining compatibility with EU objectives.


### Hiroshima AI Process Progress


The discussion highlighted progress in the Hiroshima AI process, with Lucia noting that 20 companies submitted reports to the OECD website on April 22nd, demonstrating industry engagement with the code of conduct and guiding principles agreed by G7 nations.


## Key Stakeholder Priorities


### Industry Perspective: Moving Beyond Principles


Melinda Claybaugh, Director of Privacy and AI Policy from Meta, stressed the importance of shifting focus from establishing additional principles to translating existing frameworks into actionable measures. She proposed three specific areas for continued work:


– Continuing to build policy toolkits


– Creating libraries of resources including evaluations and benchmarks


– Continuing the global scientific conversation


Ansgar Koene from EY emphasised the need for reliable, repeatable assessment methods for AI systems, highlighting the importance of standards development and transparency in evaluation methods.


### Government Priorities: Capacity and Implementation


Abhishek Singh, Under-Secretary from the Indian Ministry of Electronics and Information Technology, emphasised that operational implementation requires enhanced regulatory capacity for testing AI solutions and practical translation of agreed principles into concrete actions. He highlighted India’s efforts to make compute accessible at very low cost, noting that “high-end H100s, H200s are made available at a cost less than a dollar per GPU per hour.”


## Major Challenge: Democratising AI Access


### Global South Participation and Resource Access


Abhishek Singh articulated the challenge of ensuring that Global South countries become genuine stakeholders in AI decision-making processes rather than passive recipients of frameworks developed elsewhere. He emphasised the need for:


– Access to high-end compute resources


– More inclusive datasets that represent diverse global contexts


– A global repository of AI solutions, similar to digital public infrastructure models


Singh noted the current concentration of AI power “in a few companies within a few countries” and called for more democratic participation in AI governance and development.


### Infrastructure and Capacity Building


The discussion revealed significant challenges in ensuring equitable access to technical infrastructure necessary for AI development. Singh proposed creating a global repository of AI solutions that could enable more equitable AI development across different countries and contexts, addressing issues like deepfakes and misinformation that particularly affect developing nations.


## International Cooperation and Coordination


### Managing Framework Proliferation


Participants acknowledged both the benefits and challenges of multiple AI governance initiatives. While demonstrating international cooperation, there are concerns about potential fragmentation. Juha Heikkila noted that despite apparent multiplication of efforts, there are consistent elements such as risk-based approaches across different frameworks.


Melinda Claybaugh emphasised the risk of fragmentation for companies developing global technologies, highlighting the need for approaches that respect different national contexts while maintaining sufficient consistency for global deployment.


### Role of International Organisations


The conversation highlighted the important role of international organisations in facilitating coordination. Participants discussed emerging initiatives such as the UN Scientific Panel on AI, with Juha noting it as “quite a crucial component,” and mentioned two UN resolutions, “one led by US and one led by China.”


## AI Governance versus Internet Governance


An audience question from Shinichiro Terada from the University of Takyushu prompted discussion about differences between AI and Internet governance. Juha Heikkila explained that AI governance differs fundamentally because AI extends beyond Internet applications to include embedded systems, robotics, and autonomous vehicles, requiring different approaches tailored to AI-specific characteristics.


Despite these differences, Abhishek Singh suggested that AI governance should adopt multi-stakeholder principles from Internet governance while recognising that AI requires enhanced global partnership due to the concentration of control in fewer corporations.


## Future Directions and Commitments


### Immediate Next Steps


Several concrete commitments emerged from the discussion:


– India will host an AI Impact Summit in February, focusing on operationalising inclusive AI governance principles


– Continued development of the OECD toolkit for implementing AI principles


– Ongoing Hiroshima AI process reporting with industry participation


– Building libraries of evaluation resources and benchmarks for AI assessment


### Long-term Strategic Directions


The discussion pointed towards creating shared resources that could support more equitable AI development globally, including the proposed global repository of AI solutions. There was emphasis on building capacity building networks as outlined in Global Digital Compact implementation.


## Conclusion


The discussion revealed strong consensus on the urgent need to move from principle establishment to practical implementation of AI governance frameworks. While significant progress has been made in establishing international cooperation mechanisms, major challenges remain in ensuring equitable access to AI technologies and meaningful participation by developing countries.


Key areas requiring continued attention include addressing resource inequities, building regulatory capacity globally, and coordinating multiple governance frameworks to prevent fragmentation while respecting different national approaches. The path forward requires sustained commitment from all stakeholders and innovative approaches to resource sharing and capacity building that go beyond traditional models of international cooperation.


Session transcript

Yoichi Iida: Hi, Abhishek. How are you? Good morning, everybody! And good morning, good afternoon, good evening, depending on where you are, to online participants. My name is Yoichi Iida, the former Assistant Vice-Minister of the Japanese Ministry of Internal Affairs and Communications, and also the Chair of the OECD Digital Policy Committee. Thank you very much for joining us. Today we are discussing the current situation and also some foresight on global AI governance. We have excellent speakers on my left side, so let me introduce them briefly before they take the floor and make their own self-introductions. From my end, first, Dr. Ansgar Koene, the Global AI Ethics and Regulatory Leader from EY Global Public Policy. Next to him, Mr. Abhishek Singh, the Under-Secretary from the Indian Ministry of Electronics and Information Technology. Thank you very much, Abhishek. Next to him, we have Lucia Russo, OECD Economist in the AI and Digital Emerging Technologies Division. Next to her, we have Ms. Melinda Claybaugh, Director of Privacy and AI Policy from Meta. Thank you very much for joining us. And last but not least, we have Dr. Juha Heikkila, Advisor for International Aspects of Artificial Intelligence from the European Commission. Thank you very much for joining us. So, AI governance. As all of you know, we are seeing rapid changes in technologies, but also in policy formulation. The Japanese government started the international discussion on AI governance as early as 2016, when we made a proposal for an international discussion on AI governance at the G7 and also at the OECD. This proposal led to the agreement on the first international, intergovernmental principles, the OECD AI Principles, in 2019, and the G7 discussion led to the launch of the Global Partnership on AI (GPAI) in 2020. UNESCO also started the discussion on its AI ethics recommendation, and the European Commission started the discussion on an AI governance framework, which led to the enactment of the AI Act in 2023. After these years, we saw rapid change in AI technology, in particular near the end of 2022 with the rapid rise of ChatGPT, and we saw a lot of new types of risks and challenges brought by the new AI technology. That was the background to why we started the discussion at the G7 on the Hiroshima process: we wanted to respond to the new risks and challenges brought by generative AI. Near the end of that year, the G7 agreed on the code of conduct and guiding principles of the Hiroshima AI process, and this effort led to the launch of the reporting framework for the Hiroshima process code of conduct in 2024. This year, we saw 20 reports by AI companies published on the OECD website on the 22nd of April. In the meantime, the UN also started the discussion on AI governance, and we saw agreement on two UN resolutions related to AI, one led by the US and one led by China. The UN also started the discussion on the Global Digital Compact, which concluded in September 2024, and we are now in the process of the GDC follow-up and at the beginning of the discussion on the WSIS+20 review. So this is the rapid and short history of AI governance over the last several years. Against this background, I would like to discuss with these excellent speakers what the priorities and emphases in these discussions are for different stakeholders in the AI ecosystem, and what their perspectives are. So, let me begin with Lucia from the OECD.
So, what do you think your priorities and emphases are in promoting international or global AI governance, and which international initiatives and frameworks do you consider most significant, at present and for future discussion, for countries, international organizations, and other stakeholders? What is your view?


Lucia Russo: Thank you, Yoichi. Good morning, and thank you, my fellow panelists, for this interesting discussion. As Yoichi mentioned, we started working at the OECD, together with countries like Japan and multi-stakeholder groups, on international AI governance back in 2019, and we have continued that work throughout the years to move from the principles that were adopted by countries to policy guidance on how to put them into practice. The role of the OECD has been since then to be a convener of countries and multi-stakeholder groups, and to provide policy guidance and analytical work to support an evidence-based understanding of the risks and opportunities of artificial intelligence. So I think in terms of the role for the OECD there are three main strategic pillars. The first is moving from principles to practice, and that is undertaken through several initiatives, supported by a broad expert community that underpins our work. The second is providing metrics and evidence for policymakers, through our OECD.AI Policy Observatory, which provides trends and data but also a database of national AI policies that allows countries to see what others are doing and learn from experiences across the globe. And the third is to promote inclusive international cooperation, and in that regard a key milestone was achieved in July 2024, when the Global Partnership on AI and the OECD merged and joined forces to promote safe, secure and trustworthy AI, which again broadens the geographic scope beyond OECD members. We now have 44 members of the Global Partnership on AI, and these include six countries that are not OECD members, among them India, Serbia, Senegal, Brazil and Singapore, and the idea is that this broader geographic scope will increase as we proceed, which will foster even more effective and inclusive conversations with these countries. In terms of priorities, the Hiroshima AI process was mentioned, and that is an initiative we see as very prominent, because it provides a common, standardised framework for the principles that were put forward by the Japanese government. But more than that, the transparency element is very important, because it’s not only about committing to these principles; it’s also about demonstrating that companies are acting upon them and sharing in a transparent way which concrete actions they are taking to put them into practice. And this is really important both for countries and for companies themselves, which can have a learning experience, share these initiatives, and learn what others are doing in practice to promote the different principles in the framework. So, these are the areas where the OECD will continue working: evidence, inclusive multi-stakeholder cooperation, and guidance on policies.


Yoichi Iida: Okay, thank you very much. The OECD AI Principles agreed in 2019 paved a robust foundation for national and international AI governance, so that was very supportive, and we also learned quite a lot from these principles. Japan enacted a new AI law only last month, and it reflects a lot from the OECD AI Principles. So thank you very much. Now I would like to invite two speakers from governmental bodies, starting with Abhishek. Thank you very much for joining us. From the government perspective, what do you think your priorities and emphases are in developing AI governance, and how do you evaluate the current situation?


Abhishek Singh: Thank you. Thank you, Yoichi, and thank you for highlighting this very important issue of AI governance and how we can work together with the global community, especially with the work done at the OECD and in various forums, whether it is the UN High-Level Advisory Body on AI, the G7 Hiroshima process, or the G20 initiatives in Brazil and now South Africa. The whole world together is trying to address a common issue: how we can leverage the power of this technology, use it for larger social good, use it for enabling access to services, and let it empower people at the last mile. That has been the guiding mantra of what we have been doing in India. We are a large country, and we believe that AI can be a key enabler for empowering people and enabling access to education and healthcare in the remotest corners of the country, in various languages, through a voice interface. To do this, we need a balanced, pro-innovation, inclusive approach to the development of the technology. We need to ensure that access to AI — compute, datasets, algorithms and other tools for building safe and trusted AI — is democratized. Currently, the state of the technology is such that the real power of AI is concentrated in a few companies in a few countries. If we have to democratize this, if we have to ensure that the countries of the Global South become stakeholders in the conversations, we need this principle ingrained in all the countries around the world. This principle was well ingrained in the Global Partnership on AI, which we chaired, following last year's summit in Serbia and ahead of the coming one in Slovakia. The inclusive framework that we came up with for GPAI 2.0 within the OECD also defines that we need to become much more inclusive and bring countries of the Global South to the decision-making tables. Towards this, the Global Digital Compact initiatives also define how we actually make it happen: how do we ensure that a researcher in a remote corner of a low- and middle-income country has access to similar compute as a researcher in Silicon Valley? We need to create frameworks. At the AI Action Summit that France co-chaired along with India, the Current AI initiative came in, which involved financial commitments to build an institutional framework for funding such initiatives and for adopting AI-based technology. That is something we need to continue, and as we move from the French summit to the India summit that we will be hosting next February, we will need to work with the entire AI community to institutionalize this. In India we are making compute accessible at a very low cost — high-end H100s and H200s are made available at less than a dollar per GPU per hour. Can we build a similar framework so that researchers in low- and middle-income countries also get access to something similar? Can we build a data-sharing protocol under which, when models are trained, the datasets are much more inclusive, drawing on datasets from different contexts… We have a model in the DPI ecosystem, where there is a global repository of DPI solutions. Can we build a global repository of AI solutions which can be accessible to more countries?
That is something we need to work on when we are working on global governance frameworks. And there are tools to build: How do we do privacy enhancement? How do we do anonymization of data? How do we ensure that we are able to prevent the damage that deepfakes can cause? Democracies across the world are facing the challenge of misinformation on social media, and AI sometimes becomes an enabler of that. Can we develop tools for watermarking AI content? Can we develop global frameworks so that social media companies become part of this whole ecosystem, so we can prevent the risks that democracies face? And how do we ensure, including by building capacities across the world, that we build an AI ecosystem that is more fair, more balanced, more equitable? We are working with the global community towards this, and I hope that this discussion will further contribute to creating such enabling frameworks.
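To make the watermarking idea raised above concrete, the toy Python sketch below hides and recovers a short bit string in the least significant bits of an image array. This is a deliberately simplified illustration with invented names and values; production approaches (for example C2PA provenance metadata, or statistical watermarks for generated text) are far more robust to editing and compression.

```python
import numpy as np

def embed_watermark(image: np.ndarray, bits: str) -> np.ndarray:
    """Hide a bit string in the least significant bit of the first len(bits) pixels."""
    flat = image.flatten()  # flatten() returns a copy, so the input is untouched
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | int(b)  # clear the LSB, then set it to the watermark bit
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, n_bits: int) -> str:
    """Read the watermark back out of the least significant bits."""
    flat = image.flatten()
    return "".join(str(px & 1) for px in flat[:n_bits])

# Usage: tag an 8-bit grayscale image with the (hypothetical) marker "1011"
img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed_watermark(img, "1011")
assert extract_watermark(marked, 4) == "1011"
```

The design point the panel is circling is visible even in this toy: the marked image is perceptually identical to the original, so detection requires agreed conventions between creators and platforms, which is exactly why speakers call for global frameworks rather than company-by-company schemes.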


Yoichi Iida: Thank you very much for the very comprehensive remarks. I believe the ultimate objective of governance is to make use of AI technology as much as possible, but without concern. This is a point we need to share, and also the common objective of building up the global governance framework. Having said this, Juha, people say the AI Act may be a little too strict and may bring excessive regulation. What is your opinion, and what are the priorities and requirements of the EU?


Juha Heikkila: Thank you, Yoichi, and thank you very much for this invitation. I think it is very useful to understand that the AI Act does not regulate the technology in itself; it regulates certain uses of AI. We have a risk-based approach, and it only intervenes where necessary. So there are these statements that it regulates AI — it doesn't, actually. It regulates certain uses of AI which are considered either too harmful, dangerous or risky, so that safeguards need to be in place. In fact it is innovation-friendly, because according to our estimate about 80%, maybe even 85%, of the AI systems that we see around us would be unaffected by it. And it applies equally to everyone placing AI systems on the EU market, whether they are European, Asian, American, you name it. In that sense it creates a level playing field, and it prevents fragmentation: we have uniform rules in the European Union, not a patchwork of rules. It is not as if we would have no regulation without the AI Act, because the member states of the European Union would have proceeded to regulate individually. But regulation is just one aspect of our activities, and it is a common misconception that we only do regulation. We actually invest a lot in innovation; we have been doing that over the years, and we have always done it. The third pillar, in addition to trust and regulation on the one hand and excellence, innovation and research on the other, is international engagement. Because some of the challenges related to AI — many of them — actually cross boundaries and are global, we think that cooperation is both necessary and useful. So we want to be involved, and we engage bilaterally and multilaterally to support the setting up of a global level playing field for trustworthy, human-centric AI. We build coalitions with those who share these objectives; we want AI to be for the good of us all, so we want to promote the responsible stewardship and democratic governance of AI. But we also cooperate on technical aspects, for example on AI safety and on support to innovation and its uptake in some key sectors. We do this bilaterally with a growing number of partner countries, but we are also involved in all the key discussions: the G7 — the Hiroshima process was already mentioned, and the Hiroshima Friends — the G20, and the Global Partnership on AI, of which the European Union is a founding member, so we have been involved from the very beginning, now in an integrated partnership with the OECD. With the OECD we are involved in all the key working groups related to AI. We are a member of the Network of AI Safety Institutes. We have been actively involved in the summits — Bletchley, Seoul, Paris — and the upcoming summit in India is of course also one where we will be involved. And, of course, together with the member states, we are involved in the Global Digital Compact and its implementation, which is now in a critical phase. Basically, we do this from two perspectives.
On the one hand, we do it to promote the goals I listed; on the other, to ensure that whatever conclusions, declarations and statements are made in the Global Digital Compact are compatible with our strategy and also with our regulation, so that we don't end up with international commitments which somehow conflict with our strategy in general


Juha Heikkila: and with our regulation in particular. So this is basically the rationale for our engagement and our involvement. Thank you.
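Since the risk-based approach Juha describes recurs throughout the session, a minimal sketch may help make it concrete: regulate uses, not the technology, by sorting described use cases into tiers. The tier names below mirror the AI Act's broad categories, but the mapping itself is invented for illustration and is in no way the legal text.

```python
# Hypothetical lookup table: use-case description -> illustrative risk tier.
RISK_TIERS = {
    "social scoring by public authorities": "unacceptable (prohibited)",
    "CV screening for hiring": "high (conformity requirements apply)",
    "customer service chatbot": "limited (transparency obligations)",
    "spam filtering": "minimal (no specific obligations)",
}

def classify_use(use_case: str) -> str:
    """Look up the risk tier for a described use; default to minimal risk."""
    return RISK_TIERS.get(use_case, "minimal (no specific obligations)")

for use in ["social scoring by public authorities", "spam filtering"]:
    print(f"{use} -> {classify_use(use)}")
```

The key property, reflected in Juha's 80-85% figure, is that most entries fall through to the minimal tier: obligations attach only to the narrow set of uses judged harmful or risky.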


Yoichi Iida: Thank you very much for the detailed explanation. We really understand that the EU AI Act is aimed at pursuing an innovation-friendly environment across the EU region. We also discussed in the G7 that different countries and jurisdictions have different backgrounds and different social or economic conditions, so approaches to AI governance have to differ from one another; but that is exactly why we need to pursue interoperability across different jurisdictions and frameworks. I am personally impressed by the European Commission's approach in the discussion on the code of practice, which is very open to all stakeholders and gives our partners a lot to discuss. The private sector people were also very impressed when they joined the discussion and submitted their comments, which were much reflected in the current text, and we expect a very good result from the discussion on the code of practice as part of the AI Act. Thank you very much. Now I turn to the other stakeholders. Melinda, from the perspective of a big AI company, how do you evaluate the current situation of global AI governance? And what are the priorities and requirements of a private company in the governance framework, and what do you expect?


Melinda Claybaugh: Thank you so much for the question, and thank you for the opportunity to be here. As you were giving the opening remarks and listing all of the frameworks and the acronyms and all of the principles and bodies involved, it struck me how remarkable the work is that has gone on in the last couple of years in the international community on AI governance. There has been an incredible proliferation of frameworks and principles and codes and governing strategies, and I think at this moment it is really important to consider connecting the dots. We don't want to continue down the road of duplication and proliferation and continued putting down of principles. We have largely seen similarity and coherence of approach across the various frameworks that have been put out at a high level, and it is really important at this point to think about how we connect these frameworks and principles. Because if we do not think about that, then we are at risk — as was mentioned — of fragmentation. From a private company's perspective, the challenge of developing and deploying this technology, which is global and doesn't have borders, as we are all familiar with, is the risk of a fragmentation of approach. So it is really important to think about what we have in common and how we draw connections between these principles. Another priority is really moving from principle to practice, and I have been encouraged to see this as a theme in conversations throughout the past few days here on AI governance. We have the principles, but how do we put them into practice? I mean that in a few different ways. Of course, from a company's perspective, what does it mean? I am encouraged by the work of trying to translate some of these things into concrete measures. But also from a country's perspective: countries that want to implement and deploy and really roll out AI solutions to public challenges — how do they do that? What is the toolkit of measures and policies and frameworks at a domestic level that is important to have in place? Things like an energy policy, scientific and research infrastructure, data, compute power — all of those things are really important. How do countries make sure they have the right elements in place to really leverage AI? And then, from the perspective of policy institutions, there is a lot of work to do to set out toolkits and frameworks to make sure that all stakeholders have the opportunity to adopt AI. I am also encouraged, as we think about moving from principle to practice, that there seems to be a broadening of the conversation beyond some of the early principles. It is important to make sure that we are looking at the benefits as well as minimizing the risks — the Hiroshima AI principles and process were really important in ensuring that we are maximizing the benefits as well as minimizing the risks. So what does that mean, and how do we expand the conversation beyond risks to make sure it is benefits-based? That means including a lot of stakeholders who haven't been part of the conversation, to make sure that we are moving from principle to practice.
So how do we do that? How do we, at the AI Impact Summit, include as many stakeholders as possible in the conversation — civil society, everyone from the Global South?


Yoichi Iida: How do we include and expand that conversation, and how do we make sure we are moving to tangible, concrete impacts? Avoiding fragmentation and improving interoperability — and also your second point, from principles into action — this is very important, and that is exactly what we are now pursuing. For example, I understand the OECD is making efforts on the toolkit for the AI principles. And also the Hiroshima process: thank you very much for those results. Through its reporting framework, we can now see what the companies are doing internally when they assess risks, take countermeasures, and publicize what they are doing. All that information is on the OECD website now, and there is a lot to learn from this practical information, but we still found those reports a little difficult to read and understand, so this is another challenge for practicality. But I believe we are making progress. So, having listened to these answers, Ansgar, what is your opinion, and how do you evaluate the current situation?


Ansgar Koene: Sure, thank you very much, and thank you for the invitation to be on this panel. Reflecting on this space around AI governance, both from how we within EY are looking at this and from what we are seeing amongst our private sector and public sector clients, whom we are helping to set up their AI transformation and the governance frameworks around it: we are seeing that especially as more and more of these organizations move from exploring possible uses of AI in test cases towards actually building it into mission-critical use cases — where failure of the AI system will either have a significant impact directly on consumers or citizens, or significant impacts on the ability of the organization itself to operate — it is becoming very critical for organizations to have the confidence that they have a good governance framework in place. Such a framework allows them to assess, measure and understand the reliability of the AI system, the use cases for which it truly operates, the boundary conditions within which it should and should not be used, and the kind of information that people within and outside the organization need in order to use the AI systems correctly. If we reflect from that point of view — the need that organizations have for a good governance framework for the use of AI — onto these global exercises and initiatives, I think there are effectively two dimensions in which these global initiatives are important. One is the direct one: things like the OECD AI principles helped all organizations to have a foundation they could reflect on as they think about the key things they need in their governance thinking. The G7 code of conduct has helped to elaborate that further and to pinpoint in more detail what goes into questions such as what good transparency is, or how to think about the inclusiveness of the people who need to be reflected on when developing these systems. And now the Global Digital Compact also helps to provide a broader understanding of how to think about AI governance within the broader context of good governance itself. But there is also the indirect way, from the point of view of companies: these global instruments help to make sure that different countries have a common base from which to approach creating either regulations or voluntary guidelines, whatever works best within their particular context. But it gives a…


Yoichi Iida: Thank you very much. Exactly as you said, we need to improve interoperability and coherence across different governance frameworks. We have to admit there are differences in approaches, but we need a common foundation — probably human centricity and democratic values, including transparency, accountability, data protection and so on. So thank you very much for the comment. We believe the world is proceeding in the right direction by sharing experiences and knowledge and trying to improve coherence and interoperability. Then, with different frameworks going on, a second question: what do you think you need to do as a stakeholder, what is your role and strategy in the coming years, and in particular what do you expect from the UN Global Digital Compact, which is now discussing global AI governance? This time I would like to start with Abhishek.


Abhishek Singh: As I mentioned, our strategy for AI implementation is to ensure that we use this technology for enabling access to services for all Indians, in all languages, especially through voice. That will really empower people. What do we expect from the Global Digital Compact to make this a reality? We have a lot of expectations, because we are catching up with the West in the evolution of this technology. How do we enable access? The first request that we had, especially of the US, because that is where the companies who own compute are — 90% of it is controlled by one company — is to ensure that we have access to at least 50,000 GPUs in India. That is one practical requirement we have. The second is to ensure that the models, which are developed primarily in the West — and DeepSeek came up in China — become more inclusive, in the sense of being trained on datasets from across the world. And the third, which is the most important part, is building capacities. The Global Digital Compact document also talks about a capacity-building initiative and setting up a capacity-building network: how do we ensure that skills and capacities in all countries are developed and enhanced, so that they can take advantage of the evolving technologies? Then we also need to build safeguards. The OECD principles are there for responsible AI, for ensuring safe and trustworthy development of AI, but to ensure that, one needs tools — and regulators. Especially being in government, when we feel there is a need to regulate, how do we enhance regulatory capacity? Even if you want to test whether a particular solution meets the standards and benchmarks, do you have the regulatory capacity to test that? Enhancing that, and enhancing cooperation on that, will be very critical. So I would say that my asks of the Global Digital Compact and the UN process are at the operational level. The principles are largely agreed on — everybody talks the same language at every forum — but how do we translate that talk into action? That is the real requirement we have, and we are happy to work with the global community in making this a reality, not only for India, but for the entire Global South and the world community.


Yoichi Iida: OK, thank you very much. Inclusivity will be one of the key words in the coming months in the global AI governance discussion, and there is a lot of expectation for India's AI Impact Summit next year. So thank you very much for the comment. And now I invite Melinda for your views.


Melinda Claybaugh: Thank you so much. Under the theme of moving from principles to practice, three ideas. One is continuing to build policy toolkits — which I think the OECD is really well placed to do — for countries that want to advance their AI adoption. Two is libraries of resources along the lines of evaluations, benchmarks and third-party testing of AI that has been done, really putting that in one place; there are a lot of entities engaged in this, and building the knowledge base will be really important. And third is continuing the global scientific conversation. On that point, this is where I lead into the Global Digital Compact: the UN Scientific Panel on AI, as an independent scientific body, to continue research and conversation and to make sure we have the best scientific voices coming together; and the global dialogue on AI governance through UN forums — the convening power there is what is really important in bringing the right stakeholders.


Yoichi Iida: Okay, thank you very much. Three very important points. Melinda mentioned the OECD toolkit, so now I would like to invite Lucia for your comment.


Lucia Russo: Indeed, we have started this project to build a toolkit to implement the OECD principles, and it comes exactly from this demand for more actionable resources that would guide countries on how to go from agreed principles to concrete actions. It was agreed by the ministerial council meeting at the OECD just at the beginning of June. What is this toolkit going to do, and how is it going to be built? It will be an online interactive tool that allows users — we expect mostly government representatives — to make use of these resources by consulting and interrogating the large database that we have on national AI policies. It will be a guided interaction that allows countries to understand where they need to act, concerning both the values-based section of the principles and the policy areas that include, as we have heard, issues around compute capacity, data availability, and research and development resources. It will guide countries through understanding their needs and priorities, and then provide suggestions — policy options that other countries at a similar level of advancement, or in the same region, have already put in place and that have proven effective. So on the one hand we want to build this user experience, and on the other hand we want to enrich the repository of national policies and strategies that we already have for 72 jurisdictions in the OECD database. The idea is to build this toolkit through co-creation with countries, so that we better understand the needs, because, as we have heard, everyone agrees on the broad actions, but when it comes to practice we need to better understand the challenges, and that is where we want to work with countries and put the focus. We have also been advancing work on understanding AI uptake across sectors, again with a view to moving from a very broad conversation to concrete applications and a better understanding of the bottlenecks and the pathways to increase adoption in agriculture, in healthcare, in education, for instance. And perhaps just to close on that point: when it comes to the Hiroshima reporting framework, it is interesting to see that the framework does not only talk about risk identification, assessment and mitigation.
The last chapter also talks about how to use AI to advance human and global interests, and it is interesting to see in this first reporting cycle by 20 companies that there are initiatives reported on how companies are actually engaging with governments and civil society on projects that foster AI adoption across these key sectors. So once again, these will be priorities, and we see these as the key actions moving forward.
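Lucia's description of the toolkit amounts to a filtered lookup over a policy database: match on the principle or policy area a country asks about, then surface measures from peers in the same region or at a similar level of advancement. The sketch below illustrates only that matching idea; the records, field names and tiers are invented for illustration and do not reflect the actual OECD database schema or interface.

```python
from dataclasses import dataclass

@dataclass
class PolicyRecord:
    country: str
    region: str
    advancement: str   # e.g. "emerging", "developing", "advanced" (invented tiers)
    principle: str     # the policy area or principle the measure addresses
    policy: str        # short description of the measure

# A few invented records standing in for the database of national AI policies
DB = [
    PolicyRecord("CountryA", "Asia", "emerging", "compute access", "subsidised national GPU cloud"),
    PolicyRecord("CountryB", "Asia", "advanced", "compute access", "public-private compute exchange"),
    PolicyRecord("CountryC", "Africa", "emerging", "data availability", "open government data portal"),
]

def suggest_policies(region: str, advancement: str, principle: str) -> list[str]:
    """Return measures adopted by peers in the same region or at the same advancement level."""
    return [
        f"{r.country}: {r.policy}"
        for r in DB
        if r.principle == principle and (r.region == region or r.advancement == advancement)
    ]

# A country at an early stage asking about compute access sees what comparable peers did
print(suggest_policies(region="Asia", advancement="emerging", principle="compute access"))
```

The "guided interaction" Lucia mentions would sit on top of a query like this, helping a government articulate which principle it is working on before retrieving peer practices.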


Yoichi Iida: Okay, thank you very much. Actually, the OECD AI Principles, GPAI and the Hiroshima process are all backed by the OECD secretariat, so we look forward to working very closely in the future. Time is rather limited, but next I invite Ansgar. What is your point?


Ansgar Koene: Sure. I would very much like to echo the point that was made regarding the need to move from principles to practice, as well as the point around capacity building. Within those, I would also like to highlight the work that the OECD is doing around the AI incidents database, which is really helping to build a better understanding of where real failures of AI are occurring, as opposed to hypothetical ones. I also think it is very important for us to support and encourage broader participation in standards development in this space. Standards are often a key tool that industry uses to understand how to actually move towards implementation, and they are a good reference point, so that industry feels, yes, the wider community agrees that this is a good approach. However, for all of these things to really achieve their intended outcome — providing end users with confidence and trust in these kinds of systems — we will also need reliable, repeatable assessments of how these systems are implemented and how the governance frameworks are implemented. In order to have these, we need greater transparency as to what the particular assessments are intended to achieve and how they do it, so that we have expectation management and users understand how to interpret what an assessment has actually tested for. We need greater capacity building within the community to build an ecosystem of assessment and assurance providers in this space — we have seen some interesting work around that happening already in some jurisdictions such as the UK, and the OECD is helping in this space as well. Effectively, we just need the community to be able to provide clarity as to what a good governance framework is, how to approach this — hence the standards — and how to assess whether it has been achieved in the appropriate way, through things like assessments.


Yoichi Iida: Thank you very much. The engagement of all communities, including civil society, is very important, and the multi-stakeholder approach is definitely essential, so we believe the role of the IGF in AI governance is increasingly important. Sorry for the limited time remaining, but Juha, what is the role of Europe, and how do you think Europe will work with the world?


Juha Heikkila: We are of course very much involved in the discussions of the GDC, the Global Digital Compact, as I mentioned earlier, and to echo what Melinda said, we think that the independent scientific panel is quite a crucial component of this. I think the GDC text is very useful — what was agreed last year in that regard was very successful — and we hope that it will be translated into implementation the way it was expressed, in the spirit of the text. In this regard, for the AI governance dialogue, we think it is important that it does not duplicate existing efforts, because there are quite a lot of them; that is why the GDC text mentions that it would be held on the margins of existing meetings. I think that would be very useful, because overall there is some call for streamlining the number of events, initiatives and forums that we have in the international governance landscape in the area of AI. This kind of multiplication is not necessarily sustainable in the long run. I think we have made partial steps forward in the integrated partnership formed between the Global Partnership on AI and the OECD. We welcome that, because we had some overlap between the expert communities, and now that initiative has a better sense of purpose, backed by the structures of the OECD, which makes it more impactful from our perspective; we look forward to how it will develop further, and it will also have a role in taking these discussions to a greater audience and membership. One thing I wanted to mention very briefly is that despite this multiplication of efforts and its seemingly almost chaotic nature in some respects — to exaggerate a bit — there are some constants. One of these constants, and Melinda mentioned this as well, is that they go in the same direction. One aspect that has been included in many of them is the risk-based approach, which I mentioned as the foundation of the AI Act, but which is also reflected, for example, in the Hiroshima AI process guiding principles and the code of conduct, and elsewhere in some of the statements made at the summits. So we have some common ground, but I think it would be desirable over the long run to try to seek some convergence and to streamline.


Yoichi Iida: Okay, thank you very much. So there are a lot of efforts going on, and the GDC is one of them. The role of the UN will be very important, but we need to avoid duplication; we need to streamline and focus our efforts in the most efficient way. I hope the role of the IGF in the development of AI governance discussions will be very important, and that this will be the place where people get together and discuss not only Internet governance but also AI governance, or digital technology governance, among multi-stakeholders here at the IGF. So thank you very much. I wanted to take one question, but I'm not sure I'm allowed — we have run out of time, we have just got one minute. Just ask. Okay, please. But maybe you need a microphone. You can go there and ask — the IGF protocol, I'm sorry.


Audience: Thank you very much for the great discussions. My name is Shinichiro Terada from the University of Kitakyushu, Japan. I would like to understand AI governance in comparison with Internet governance. When the Internet was spreading globally, there were various challenges, such as supporting…

Yoichi Iida: Thank you very much for this complicated question, but we want to answer it.


Juha Heikkila: Okay, you have it. It is a very complicated question. I will comment on one aspect, maybe, and of course let my fellow panelists comment too. Broadly speaking — I heard this comment the day before yesterday — it is said that AI is on the Internet and therefore Internet governance is suitable for it. But there is more to AI than what is on the Internet. Think of embedded AI, for example: robotics, intelligent robotics, autonomous vehicles, etc. Not all of AI is on the Internet. There may be some inspiration that AI governance can take from the principles of Internet governance, but there are numerous issues related to AI governance which cannot simply be taken over from Internet governance — issues which are specific to AI and which have characteristics with no matching aspects in Internet governance. So I would personally see them as broadly different, with potentially some inspiration for AI governance taken from Internet governance.


Yoichi Iida: Thank you very much. I would broadly agree with him. The only thing I would add is that AI and the Internet are two different things.


Abhishek Singh: AI includes a lot more than the Internet, as he mentioned — in use cases and also in inputs, as rightly noted. And AI today is controlled by a few corporations. So, in order to make it more equitable and bring the principles of Internet governance into AI governance, it will have to be multi-stakeholder. It will have to ensure that the way we approach AI governance is more inclusive, involving the people who are technology providers as well as the people who are technology users. When we are able to strike that balance, we will be able to make it more fair, more balanced, more equitable, and this will require a lot more global partnership than Internet governance has needed so far. But the frameworks, mechanisms and protocols which the Internet Governance Forum has evolved can be a good guiding light for working on AI governance principles.


Ansgar Koene: Maybe I can just add one additional perspective, which I think links closely to what Juha mentioned as one of the themes picked up across so many of the governance approaches around AI: the risk-based approach. Within AI, the risk very much depends on the use case, because AI is a core technology that can be used in so many different kinds of applications and application spaces, whereas the Internet in that sense is more of a uniform kind of thing. Any more?


Yoichi Iida: Okay. So, thank you very much. Time is up, but I hope you enjoyed the discussion — please give your applause to the excellent speakers. Actually, this is too excellent a discussion to close now, but time is up. Thank you very much. You won't believe it, you know — they were given the questions only at midnight yesterday. And we must also acknowledge the presence of His Excellency the President of Mauritius, who is here. Thank you very much, Your Excellency. Okay. Thank you.


Yoichi Iida
Speech speed: 112 words per minute
Speech length: 2037 words
Speech time: 1083 seconds

Japan initiated international AI governance discussions in 2016, leading to OECD AI principles (2019), Global Partnership on AI (2020), and the Hiroshima process responding to generative AI challenges

Explanation

Japan started international discussions on AI governance at G7 and OECD in 2016, which led to the first international and intergovernmental principles. This foundation enabled subsequent developments including the Global Partnership on AI launch and the Hiroshima process to address new challenges from generative AI technologies.


Evidence

OECD AI principles agreed in 2019, Global Partnership on AI launched in 2020, G7 Hiroshima process code of conduct and guiding principles agreed by end of year, reporting framework launched in 2024 with 20 reports by AI companies published on OECD website on April 22nd


Major discussion point

Evolution and Current State of Global AI Governance


Topics

Legal and regulatory


Lucia Russo
Speech speed: 131 words per minute
Speech length: 1146 words
Speech time: 522 seconds

OECD has evolved from establishing principles in 2019 to providing policy guidance and analytical work, with three strategic pillars: moving from principles to practice, providing metrics through AI policy observatory, and promoting inclusive international cooperation

Explanation

The OECD serves as a convener of countries and multi-stakeholder groups, providing evidence-based understanding of AI risks and opportunities. The organization has developed three main strategic approaches to support implementation of AI principles through practical guidance and international cooperation.


Evidence

OECD AI policy observatory provides trends and data plus database of national AI policies, Global Partnership on AI and OECD merged in July 2024 creating 44 members including six non-OECD countries (India, Serbia, Senegal, Brazil, Singapore), expert community supporting work


Major discussion point

Evolution and Current State of Global AI Governance


Topics

Legal and regulatory | Development


Agreed with

– Abhishek Singh
– Melinda Claybaugh
– Ansgar Koene

Agreed on

Moving from principles to practice is the critical next step in AI governance


OECD is developing an interactive toolkit to help countries implement AI principles through guided policy options based on successful practices from similar jurisdictions

Explanation

The toolkit will be an online interactive tool allowing government representatives to consult a database of national AI policies through guided interaction. It will help countries understand where they need to act and provide policy suggestions from other countries with similar advancement levels or regional contexts.


Evidence

Toolkit approved by ministerial council meeting at OECD in June, will cover both values-based principles and policy areas including compute capacity, data availability, research and development resources, database covers 72 jurisdictions on national strategies


Major discussion point

Moving from Principles to Practice


Topics

Legal and regulatory | Development


The Global Partnership on AI merger with OECD expanded membership to 44 countries including six non-OECD members, broadening geographic scope for more inclusive conversations

Explanation

The merger achieved in July 2024 was a key milestone that broadened the geographic scope beyond OECD members to include developing countries. This expansion aims to foster more effective and inclusive conversations with a broader range of stakeholders.


Evidence

44 members total with six non-OECD countries: India, Serbia, Senegal, Brazil, Singapore, with expectation that broader geographic scope will continue to increase


Major discussion point

Inclusivity and Global South Participation


Topics

Development | Legal and regulatory


Agreed with

– Abhishek Singh
– Juha Heikkila
– Melinda Claybaugh

Agreed on

Need for inclusive international cooperation and avoiding fragmentation


Abhishek Singh
Speech speed: 196 words per minute
Speech length: 1445 words
Speech time: 441 seconds

AI democratization requires ensuring Global South countries become stakeholders in decision-making, with access to compute resources, inclusive datasets, and capacity building initiatives

Explanation

Currently, AI power is concentrated in few companies and countries, requiring democratization to make Global South countries true stakeholders. This involves providing access to compute resources, ensuring training datasets are more inclusive of global contexts, and building institutional frameworks for funding and adoption.


Evidence

90% of compute controlled by one company, need access to at least 50,000 GPUs in India, high-end H100s and H200s made available at less than $1 per GPU per hour in India, AI Action Summit concept of current AI requiring financial commitments, India hosting AI Impact Summit in February next year


Major discussion point

Inclusivity and Global South Participation


Topics

Development | Infrastructure


Agreed with

– Lucia Russo
– Juha Heikkila
– Melinda Claybaugh

Agreed on

Need for inclusive international cooperation and avoiding fragmentation


Operational implementation requires tools for regulators, enhanced regulatory capacity for testing AI solutions, and practical translation of agreed principles into concrete actions

Explanation

While principles are largely agreed upon globally, the challenge lies in translating these into operational actions. This requires building regulatory capacity to test AI solutions against standards and benchmarks, and developing practical tools for implementation.


Evidence

Principles agreed at every forum with same language, need for regulatory capacity to test solutions against standards and benchmarks, requirement for tools for watermarking AI content and frameworks for social media companies to prevent misinformation


Major discussion point

Moving from Principles to Practice


Topics

Legal and regulatory | Development


Agreed with

– Lucia Russo
– Melinda Claybaugh
– Ansgar Koene

Agreed on

Moving from principles to practice is the critical next step in AI governance


Global Digital Compact should focus on operational level implementation, capacity building networks, and enhanced cooperation on regulatory tools rather than just principles

Explanation

The Global Digital Compact should move beyond principle-setting to address practical operational needs. This includes establishing capacity building networks, enhancing regulatory cooperation, and creating frameworks for skill development across all countries.


Evidence

Global Digital Compact document mentions capacity building initiative and setting up capacity building network, need for skills and capacities development in all countries, requirement for enhanced cooperation on regulatory capacity building


Major discussion point

International Cooperation and Framework Coordination


Topics

Development | Legal and regulatory


India requires access to high-end compute resources, more inclusive training datasets, and global repository of AI solutions to enable equitable AI development

Explanation

India’s strategy focuses on using AI for enabling access to services for all citizens in all languages, particularly through voice interfaces. This requires practical access to compute resources, datasets that reflect global diversity, and shared AI solutions.


Evidence

Request for access to at least 50,000 GPUs, H100s and H200s available at less than $1 per GPU per hour in India, models primarily developed in West and China need training on global datasets, concept of global repository of AI solutions similar to DPI ecosystem model


Major discussion point

Technical Infrastructure and Capacity Building


Topics

Infrastructure | Development


AI governance should adopt multi-stakeholder principles from Internet governance while recognizing that AI requires more global partnership due to concentration of control in few corporations

Explanation

AI governance can learn from Internet governance frameworks and mechanisms, but requires more extensive global partnership due to the concentrated control of AI technology. The approach should be multi-stakeholder, involving both technology providers and users to achieve fairness and equity.


Evidence

AI controlled by few corporations, need for balance between technology providers and users, Internet Governance Forum protocols and mechanisms can serve as guiding light for AI governance principles


Major discussion point

AI Governance vs Internet Governance Comparison


Topics

Legal and regulatory | Development


Agreed with

– Juha Heikkila
– Ansgar Koene

Agreed on

AI governance differs significantly from Internet governance


Disagreed with

– Juha Heikkila

Disagreed on

Scope and nature of AI governance compared to Internet governance


Juha Heikkila
Speech speed: 149 words per minute
Speech length: 1277 words
Speech time: 511 seconds

EU’s AI Act regulates specific uses of AI rather than the technology itself, using a risk-based approach that affects only 15-20% of AI systems while maintaining innovation-friendly environment

Explanation

The AI Act takes a risk-based approach, only intervening where necessary for harmful, dangerous, or risky uses of AI. This creates a level playing field for all entities placing AI systems on the EU market regardless of origin, while avoiding excessive regulation that could stifle innovation.


Evidence

About 80-85% of AI systems would be unaffected by the Act, applies equally to European, Asian, American companies, prevents fragmentation by creating uniform rules across EU instead of patchwork of member state regulations


Major discussion point

Evolution and Current State of Global AI Governance


Topics

Legal and regulatory


EU engages bilaterally and multilaterally to support global level playing field for trustworthy AI, participating in G7, G20, Global Partnership on AI, and various summits while ensuring compatibility with EU strategy

Explanation

The EU’s international engagement is built on three pillars: trust/regulation, excellence/innovation, and international cooperation. The EU participates in all key international discussions to promote responsible stewardship and democratic governance of AI while ensuring alignment with its own regulatory framework.


Evidence

Founding member of Global Partnership on AI, involved in G7 Hiroshima process, G20 initiatives, Network of AI Safety Institutes, summits at Bletchley, Seoul, Paris, upcoming India summit, Global Digital Compact participation


Major discussion point

International Cooperation and Framework Coordination


Topics

Legal and regulatory


Despite seeming multiplication of efforts, there are constants like risk-based approaches reflected across frameworks, suggesting common ground but need for convergence and streamlining

Explanation

While there appears to be a chaotic multiplication of AI governance efforts, common elements like risk-based approaches appear consistently across different frameworks. This suggests underlying agreement but highlights the need for better coordination and streamlining of efforts.


Evidence

Risk-based approach reflected in AI Act, G7 Hiroshima process guiding principles and code of conduct, and other summit statements, integrated partnership between Global Partnership on AI and OECD as example of streamlining


Major discussion point

International Cooperation and Framework Coordination


Topics

Legal and regulatory


Agreed with

– Ansgar Koene

Agreed on

Risk-based approach as a common foundation across AI governance frameworks


AI governance differs from Internet governance because AI extends beyond Internet applications to embedded systems, robotics, and autonomous vehicles, requiring different approaches for AI-specific characteristics

Explanation

While AI may take some inspiration from Internet governance principles, AI encompasses much more than what operates on the Internet. AI includes embedded systems, robotics, and autonomous vehicles that have characteristics not found in Internet governance, requiring specific approaches.


Evidence

Examples of non-Internet AI: embedded AI, robotics, intelligent robotics, autonomous vehicles, numerous AI-specific issues without matching aspects in Internet governance


Major discussion point

AI Governance vs Internet Governance Comparison


Topics

Legal and regulatory | Infrastructure


Agreed with

– Abhishek Singh
– Ansgar Koene

Agreed on

AI governance differs significantly from Internet governance


Disagreed with

– Abhishek Singh

Disagreed on

Scope and nature of AI governance compared to Internet governance


Melinda Claybaugh
Speech speed: 157 words per minute
Speech length: 864 words
Speech time: 328 seconds

The proliferation of AI governance frameworks shows remarkable international cooperation, but now requires connecting dots and avoiding fragmentation

Explanation

There has been an incredible proliferation of frameworks, principles, and codes in AI governance showing strong international cooperation. However, the focus should now shift to connecting these frameworks rather than continuing to create new principles, to avoid the risk of fragmentation for global technology deployment.


Evidence

Similarity and coherence of approach across various high-level frameworks, challenge of running global technology across fragmented regulatory approaches


Major discussion point

Evolution and Current State of Global AI Governance


Topics

Legal and regulatory


Agreed with

– Lucia Russo
– Abhishek Singh
– Juha Heikkila

Agreed on

Need for inclusive international cooperation and avoiding fragmentation


The focus should shift from establishing more principles to translating existing frameworks into actionable measures for companies, countries, and policy institutions

Explanation

Moving from principle to practice involves translating frameworks into concrete measures for companies, helping countries implement AI solutions for public challenges, and providing policy institutions with practical toolkits. This includes ensuring countries have necessary infrastructure like energy policy, research capabilities, and compute power.


Evidence

Need for energy policy, scientific infrastructure, research infrastructure, data, compute power for countries to leverage AI, broadening conversation beyond early principles to include benefits alongside risk minimization


Major discussion point

Moving from Principles to Practice


Topics

Legal and regulatory | Infrastructure


Agreed with

– Lucia Russo
– Abhishek Singh
– Ansgar Koene

Agreed on

Moving from principles to practice is the critical next step in AI governance


Expanding conversations beyond risks to include benefits requires involving stakeholders who haven’t been part of the discussion, particularly from civil society and Global South

Explanation

The Hiroshima AI principles and process were important in ensuring focus on maximizing benefits alongside minimizing risks. This requires expanding the conversation to include more stakeholders, particularly civil society and Global South participants, to achieve tangible impacts.


Evidence

Hiroshima AI principles focus on maximizing benefits as well as minimizing risks, need to include civil society and Global South in conversations, AI Impact Summit as example of inclusive stakeholder engagement


Major discussion point

Inclusivity and Global South Participation


Topics

Development | Legal and regulatory


UN Scientific Panel on AI and global dialogue on AI governance should avoid duplicating existing efforts while providing independent scientific research and convening power

Explanation

The UN’s role should focus on providing independent scientific research through the Scientific Panel and using its convening power for global dialogue on AI governance. However, this should be done carefully to avoid duplicating existing international efforts and initiatives.


Evidence

UN Scientific Panel on AI as independent scientific body, global dialogue on AI governance through UN forums, importance of convening power for bringing right stakeholders together


Major discussion point

International Cooperation and Framework Coordination


Topics

Legal and regulatory


Building policy toolkits, libraries of evaluation resources, and continuing global scientific conversation are essential for advancing AI adoption

Explanation

Three key areas for moving from principles to practice include developing comprehensive policy toolkits for countries, creating centralized libraries of AI evaluations and benchmarks, and maintaining ongoing global scientific dialogue. These resources help countries advance their AI adoption capabilities.


Evidence

OECD well-placed to build policy toolkits, need for libraries of evaluations and benchmarks and third-party testing resources, importance of continuing global scientific conversation


Major discussion point

Technical Infrastructure and Capacity Building


Topics

Development | Legal and regulatory


Ansgar Koene
Speech speed: 151 words per minute
Speech length: 884 words
Speech time: 349 seconds

Companies need concrete governance frameworks to assess reliability and understand boundary conditions for mission-critical AI applications, with global initiatives providing both direct guidance and indirect harmonization across jurisdictions

Explanation

As organizations move from exploring AI to implementing it in mission-critical applications, they need confidence in governance frameworks that help assess AI system reliability and understand operational boundaries. Global initiatives provide direct guidance through principles and indirect benefits by helping countries create compatible regulations.


Evidence

Organizations moving from test cases to mission-critical applications where AI failure has significant impact, need to understand boundary conditions and provide correct usage information, OECD AI principles and G7 code of conduct providing foundation for organizational governance thinking


Major discussion point

Moving from Principles to Practice


Topics

Legal and regulatory


Agreed with

– Lucia Russo
– Abhishek Singh
– Melinda Claybaugh

Agreed on

Moving from principles to practice is the critical next step in AI governance


Standards development, reliable assessments, and transparency in evaluation methods require broader community participation and capacity building for assessment providers

Explanation

Effective AI governance implementation requires supporting broader participation in standards development, creating reliable and repeatable assessments, and building an ecosystem of assessment providers. This includes providing transparency about what assessments actually test and building community capacity for evaluation.


Evidence

OECD AI Incidents database helping understand real AI failures versus hypothetical ones, interesting work in jurisdictions like UK on building assessment ecosystems, need for expectation management so users understand assessment scope


Major discussion point

Technical Infrastructure and Capacity Building


Topics

Legal and regulatory | Digital standards


Risk-based approach in AI governance reflects use-case dependency, unlike Internet’s more uniform nature, making AI governance more complex and application-specific

Explanation

AI governance complexity stems from the fact that AI is a core technology applicable across many different use cases, where risk depends heavily on the specific application. This contrasts with Internet governance, which deals with a more uniform technology platform.


Evidence

Risk-based approach picked up across many AI governance frameworks, AI risk depends on use case while Internet is more uniform technology


Major discussion point

AI Governance vs Internet Governance Comparison


Topics

Legal and regulatory


Agreed with

– Juha Heikkila
– Abhishek Singh

Agreed on

AI governance differs significantly from Internet governance



Audience

Speech speed

122 words per minute

Speech length

51 words

Speech time

25 seconds

AI governance should learn from Internet governance experiences while recognizing the differences between the two domains

Explanation

The audience member from the University of Kitakyushu questioned how AI governance compares to Internet governance, noting that when the Internet was spreading globally there were various challenges. This suggests interest in applying lessons learned from Internet governance to the emerging field of AI governance.


Evidence

Reference to challenges faced during global Internet expansion


Major discussion point

AI Governance vs Internet Governance Comparison


Topics

Legal and regulatory


Agreements

Agreement points

Moving from principles to practice is the critical next step in AI governance

Speakers

– Lucia Russo
– Abhishek Singh
– Melinda Claybaugh
– Ansgar Koene

Arguments

OECD has evolved from establishing principles in 2019 to providing policy guidance and analytical work, with three strategic pillars: moving from principles to practice, providing metrics through AI policy observatory, and promoting inclusive international cooperation


Operational implementation requires tools for regulators, enhanced regulatory capacity for testing AI solutions, and practical translation of agreed principles into concrete actions


The focus should shift from establishing more principles to translating existing frameworks into actionable measures for companies, countries, and policy institutions


Companies need concrete governance frameworks to assess reliability and understand boundary conditions for mission-critical AI applications, with global initiatives providing both direct guidance and indirect harmonization across jurisdictions


Summary

All speakers agree that while AI governance principles have been established across various frameworks, the urgent need now is to translate these principles into practical, actionable measures that can be implemented by companies, governments, and institutions


Topics

Legal and regulatory | Development


Need for inclusive international cooperation and avoiding fragmentation

Speakers

– Lucia Russo
– Abhishek Singh
– Juha Heikkila
– Melinda Claybaugh

Arguments

The Global Partnership on AI merger with OECD expanded membership to 44 countries including six non-OECD members, broadening geographic scope for more inclusive conversations


AI democratization requires ensuring Global South countries become stakeholders in decision-making, with access to compute resources, inclusive datasets, and capacity building initiatives


Despite seeming multiplication of efforts, there are constants like risk-based approaches reflected across frameworks, suggesting common ground but need for convergence and streamlining


The proliferation of AI governance frameworks shows remarkable international cooperation, but now requires connecting dots and avoiding fragmentation


Summary

Speakers unanimously agree on the importance of inclusive international cooperation that brings Global South countries into decision-making processes while avoiding fragmentation through better coordination of existing frameworks


Topics

Legal and regulatory | Development


Risk-based approach as a common foundation across AI governance frameworks

Speakers

– Juha Heikkila
– Ansgar Koene

Arguments

Despite seeming multiplication of efforts, there are constants like risk-based approaches reflected across frameworks, suggesting common ground but need for convergence and streamlining


Risk-based approach in AI governance reflects use-case dependency, unlike Internet’s more uniform nature, making AI governance more complex and application-specific


Summary

Both speakers recognize that risk-based approaches have emerged as a consistent element across different AI governance frameworks, providing common ground despite the complexity of AI applications


Topics

Legal and regulatory


AI governance differs significantly from Internet governance

Speakers

– Juha Heikkila
– Abhishek Singh
– Ansgar Koene

Arguments

AI governance differs from Internet governance because AI extends beyond Internet applications to embedded systems, robotics, and autonomous vehicles, requiring different approaches for AI-specific characteristics


AI governance should adopt multi-stakeholder principles from Internet governance while recognizing that AI requires more global partnership due to concentration of control in few corporations


Risk-based approach in AI governance reflects use-case dependency, unlike Internet’s more uniform nature, making AI governance more complex and application-specific


Summary

Speakers agree that while AI governance can learn from Internet governance principles, AI presents unique challenges requiring different approaches due to its broader applications beyond the Internet and concentrated control structure


Topics

Legal and regulatory | Infrastructure


Similar viewpoints

Both speakers emphasize the critical importance of including Global South countries and underrepresented stakeholders in AI governance discussions, moving beyond risk-focused conversations to include benefits and ensuring equitable access to AI resources

Speakers

– Abhishek Singh
– Melinda Claybaugh

Arguments

AI democratization requires ensuring Global South countries become stakeholders in decision-making, with access to compute resources, inclusive datasets, and capacity building initiatives


Expanding conversations beyond risks to include benefits requires involving stakeholders who haven’t been part of the discussion, particularly from civil society and Global South


Topics

Development | Legal and regulatory


Both speakers advocate for developing comprehensive toolkits and resource libraries that provide practical guidance for implementing AI governance principles, with OECD being well-positioned to lead this effort

Speakers

– Lucia Russo
– Melinda Claybaugh

Arguments

OECD is developing an interactive toolkit to help countries implement AI principles through guided policy options based on successful practices from similar jurisdictions


Building policy toolkits, libraries of evaluation resources, and continuing global scientific conversation are essential for advancing AI adoption


Topics

Development | Legal and regulatory


Both speakers stress the need for building regulatory and assessment capacity, including tools for testing AI systems and transparent evaluation methods that can be implemented by regulatory bodies

Speakers

– Abhishek Singh
– Ansgar Koene

Arguments

Operational implementation requires tools for regulators, enhanced regulatory capacity for testing AI solutions, and practical translation of agreed principles into concrete actions


Standards development, reliable assessments, and transparency in evaluation methods require broader community participation and capacity building for assessment providers


Topics

Legal and regulatory | Digital standards


Unexpected consensus

Innovation-friendly regulation approach

Speakers

– Juha Heikkila
– Melinda Claybaugh

Arguments

EU’s AI Act regulates specific uses of AI rather than the technology itself, using a risk-based approach that affects only 15-20% of AI systems while maintaining innovation-friendly environment


The proliferation of AI governance frameworks shows remarkable international cooperation, but now requires connecting dots and avoiding fragmentation


Explanation

It’s unexpected that a major tech company representative (Melinda) and EU regulator (Juha) would find such strong alignment on the innovation-friendly nature of regulation, with both emphasizing that current approaches avoid stifling innovation while providing necessary safeguards


Topics

Legal and regulatory


Streamlining and avoiding duplication of international efforts

Speakers

– Juha Heikkila
– Melinda Claybaugh
– Abhishek Singh

Arguments

Despite seeming multiplication of efforts, there are constants like risk-based approaches reflected across frameworks, suggesting common ground but need for convergence and streamlining


UN Scientific Panel on AI and global dialogue on AI governance should avoid duplicating existing efforts while providing independent scientific research and convening power


Global Digital Compact should focus on operational level implementation, capacity building networks, and enhanced cooperation on regulatory tools rather than just principles


Explanation

Unexpected consensus among representatives from different regions and sectors (an EU regulator, a US tech company, and the Indian government) on the need to streamline international AI governance efforts rather than create more frameworks, showing pragmatic alignment across different stakeholder types


Topics

Legal and regulatory


Overall assessment

Summary

The discussion reveals strong consensus on key foundational issues: the urgent need to move from principles to practical implementation, the importance of inclusive international cooperation that brings Global South countries into decision-making, the adoption of risk-based approaches as common ground, and recognition that AI governance requires different approaches than Internet governance. There is also unexpected alignment between regulators and industry on innovation-friendly approaches and the need to streamline rather than proliferate international frameworks.


Consensus level

High level of consensus with significant implications for AI governance development. The alignment suggests that despite different stakeholder perspectives, there is substantial agreement on both the direction and methodology for advancing global AI governance. This consensus provides a strong foundation for coordinated international action, particularly in developing practical implementation tools, building inclusive frameworks, and avoiding regulatory fragmentation. The agreement spans both procedural aspects (how to govern) and substantive priorities (what to focus on), indicating mature understanding of the challenges and realistic pathways forward.


Differences

Different viewpoints

Scope and nature of AI governance compared to Internet governance

Speakers

– Juha Heikkila
– Abhishek Singh

Arguments

AI governance differs from Internet governance because AI extends beyond Internet applications to embedded systems, robotics, and autonomous vehicles, requiring different approaches for AI-specific characteristics


AI governance should adopt multi-stakeholder principles from Internet governance while recognizing that AI requires more global partnership due to concentration of control in few corporations


Summary

Juha emphasizes the fundamental differences between AI and Internet governance due to AI’s broader scope beyond Internet applications, while Abhishek focuses on adapting Internet governance principles to AI while addressing the concentration of corporate control


Topics

Legal and regulatory | Infrastructure


Unexpected differences

Limited disagreement on fundamental AI governance principles despite different jurisdictional approaches

Speakers

– All speakers

Arguments

Various arguments about implementation approaches but consistent agreement on core principles


Explanation

Surprisingly, there was minimal fundamental disagreement among speakers from different regions (EU, India, OECD, private sector) on core AI governance principles, with most differences being about implementation methods rather than underlying goals


Topics

Legal and regulatory


Overall assessment

Summary

The discussion showed remarkably low levels of fundamental disagreement, with most differences centered on implementation approaches rather than core principles. The main areas of difference were: technical approaches to capacity building, the relationship between AI and Internet governance, and specific mechanisms for Global South inclusion.


Disagreement level

Low to moderate disagreement level with high consensus on principles but varying approaches to implementation. This suggests strong foundation for international cooperation but potential challenges in coordinating diverse implementation strategies across different jurisdictions and stakeholder groups.




Takeaways

Key takeaways

Global AI governance has rapidly evolved from Japan’s 2016 initiative to multiple frameworks (OECD principles, Hiroshima process, EU AI Act, Global Digital Compact), showing remarkable international cooperation but now requiring coordination to avoid fragmentation


The critical phase is moving from establishing principles to practical implementation – companies, governments, and organizations need concrete toolkits, assessment methods, and operational guidance rather than more high-level frameworks


Inclusivity and democratization of AI are essential, particularly ensuring Global South participation through access to compute resources, inclusive datasets, capacity building, and meaningful involvement in decision-making processes


Risk-based approaches have emerged as a common foundation across different frameworks, suggesting convergence potential despite apparent multiplication of governance efforts


AI governance differs fundamentally from Internet governance due to AI’s broader applications beyond Internet-based systems, requiring specialized approaches while potentially adopting multi-stakeholder principles


International cooperation should focus on interoperability between different jurisdictional approaches while respecting diverse national contexts and regulatory frameworks


Resolutions and action items

OECD to develop an interactive online toolkit to help countries implement AI principles through guided policy options based on successful practices from similar jurisdictions


Continue the Hiroshima AI process reporting framework with companies providing transparent reports on their AI governance practices


Expand Global Partnership on AI membership beyond current 44 countries to increase Global South representation


India to host AI Impact Summit in February focusing on operationalizing inclusive AI governance principles


Build global repository of AI solutions accessible to more countries, similar to the DPI ecosystem model


Develop capacity building networks as outlined in Global Digital Compact implementation


Create libraries of evaluation resources and benchmarks for AI assessment that can be shared globally


Unresolved issues

How to effectively streamline and coordinate the proliferation of AI governance frameworks without losing momentum or excluding stakeholders


Practical mechanisms for ensuring Global South access to high-end compute resources (like H100s, H200s) at affordable costs


Specific implementation details for making AI training datasets more inclusive and representative of global contexts


How to enhance regulatory capacity in developing countries to test and assess AI systems against established standards


Balancing innovation-friendly approaches with necessary safeguards across different jurisdictional frameworks


Defining the exact role and scope of UN Scientific Panel on AI to avoid duplication with existing initiatives


Addressing the concentration of AI development power in few companies and countries while maintaining technological advancement


Suggested compromises

Adopt risk-based approaches that allow different jurisdictions to implement AI governance according to their contexts while maintaining common foundational principles


Focus UN Global Digital Compact discussions on operational implementation rather than creating new principles, building on existing frameworks


Streamline international AI governance forums over time while maintaining the successful integrated partnership model between Global Partnership on AI and OECD


Balance innovation promotion with risk mitigation by focusing governance on specific high-risk AI applications rather than regulating the technology broadly


Use multi-stakeholder approaches from Internet governance while adapting to AI’s unique characteristics and broader application scope


Develop interoperable frameworks that respect different national approaches while ensuring global coordination and knowledge sharing


Thought provoking comments

Currently, the state of the technology is such that the real power of AI is concentrated in a few companies in a few countries. If you have to democratise this, if you have to kind of ensure that the country, the Global South, become a stakeholder in the conversations around, we need to have this principle ingrained in all the countries around the world.

Speaker

Abhishek Singh


Reason

This comment was particularly insightful because it shifted the conversation from abstract governance principles to concrete power dynamics and equity issues. Singh highlighted the fundamental challenge that AI governance isn’t just about creating rules, but about addressing the concentration of technological power and ensuring meaningful participation from developing nations.


Impact

This comment significantly influenced the discussion’s trajectory by introducing the theme of inclusivity and democratization that became central to subsequent speakers’ remarks. It prompted other panelists to address capacity building, resource sharing, and the need for more equitable access to AI technologies. The comment also established the Global South perspective as a critical lens through which to evaluate governance frameworks.


I think at this moment, it’s really important to consider connecting the dots. I think we don’t want to continue down the road of duplication and proliferation and continued putting down of principles… And from a private company’s perspective, the challenge of running this technology and developing and deploying this technology that is global and doesn’t have borders, as we’re all familiar with, is the risk of the fragmentation of approach.

Speaker

Melinda Claybaugh


Reason

This observation was thought-provoking because it challenged the prevailing approach of creating multiple governance frameworks. Claybaugh identified a critical problem: the proliferation of principles without sufficient focus on implementation and interoperability, which creates practical challenges for global technology deployment.


Impact

This comment catalyzed a shift in the discussion from celebrating the various governance initiatives to critically examining their effectiveness and coherence. It introduced the concept of ‘fragmentation risk’ that other speakers then built upon, leading to discussions about streamlining efforts and improving interoperability between different jurisdictions’ approaches.


The AI Act does not regulate the technology in itself, it regulates certain uses of AI. So we have a risk-based approach and it only intervenes where it’s necessary… in fact it’s innovation friendly because about 80% according to our estimate, maybe even 85% of AI systems that we see around would be unaffected by it.

Speaker

Juha Heikkila


Reason

This clarification was insightful because it directly addressed widespread misconceptions about the EU AI Act being overly restrictive. By providing specific statistics and explaining the risk-based approach, Heikkila reframed the narrative around regulation from being innovation-stifling to being targeted and proportionate.


Impact

This comment helped establish a more nuanced understanding of regulatory approaches in the discussion. It influenced subsequent conversations about balancing innovation with safety, and provided a concrete example of how governance can be both protective and innovation-friendly, which other speakers referenced when discussing their own approaches.


We are seeing that especially as more and more of these organizations are moving from exploring possible uses of AI in test cases towards actually building it into mission critical use cases where failure of the AI system will either have a significant impact directly on consumers or citizens… it is becoming very critical for organizations to have the confidence that they have a good governance framework in place.

Speaker

Ansgar Koene


Reason

This comment was particularly valuable because it connected theoretical governance discussions to practical organizational needs. Koene highlighted the evolution from experimental AI use to mission-critical applications, emphasizing why governance frameworks must be reliable and actionable rather than merely aspirational.


Impact

This observation reinforced the ‘principles to practice’ theme that became central to the discussion. It provided concrete justification for why the governance frameworks being discussed matter in real-world implementation, and supported arguments made by other speakers about the need for practical toolkits and assessment mechanisms.


There is some call for streamlining in terms of the number of events and initiatives and forums that we have in the international governance landscape in the area of AI. I think that this kind of multiplication is not necessarily sustainable in the long run.

Speaker

Juha Heikkila


Reason

This was a bold and thought-provoking statement because it challenged the assumption that more governance initiatives are inherently better. Heikkila raised questions about the sustainability and effectiveness of the current proliferation of AI governance forums and frameworks.


Impact

This comment validated and expanded upon Claybaugh’s earlier concerns about fragmentation, creating a consensus around the need for consolidation and better coordination. It influenced the moderator’s closing remarks about the role of IGF and the importance of avoiding duplication, suggesting a potential path forward for more streamlined governance approaches.


Overall assessment

These key comments fundamentally shaped the discussion by introducing three critical themes that transformed it from a routine overview of governance initiatives into a more sophisticated analysis of systemic challenges. First, Singh’s emphasis on power concentration and Global South inclusion established equity as a central concern, influencing all subsequent speakers to address inclusivity and capacity building. Second, Claybaugh’s observation about fragmentation and the need to ‘connect the dots’ created a critical lens through which other speakers evaluated existing frameworks, leading to discussions about interoperability and streamlining. Third, the collective emphasis on moving ‘from principles to practice’ – reinforced by Koene’s practical perspective and supported by others – shifted the conversation from celebrating existing frameworks to critically examining their implementation challenges. These comments created a more mature, nuanced discussion that acknowledged both the progress made in AI governance and the significant challenges that remain, ultimately pointing toward more coordinated, inclusive, and practically-oriented approaches to global AI governance.


Follow-up questions

How can we ensure researchers in low- and medium-income countries have access to similar compute resources as researchers in Silicon Valley?

Speaker

Abhishek Singh


Explanation

This addresses the digital divide and democratization of AI technology access globally, which is crucial for inclusive AI development


Can we build up a global depository of AI solutions which can be accessible to more countries?

Speaker

Abhishek Singh


Explanation

This would facilitate knowledge sharing and prevent duplication of AI development efforts across different countries


How do we develop tools for watermarking AI content and global frameworks so that social media companies become part of preventing misinformation risks?

Speaker

Abhishek Singh


Explanation

This addresses the growing concern about AI-generated misinformation and deepfakes threatening democratic processes


How do we enhance regulatory capacity for testing AI solutions against standards and benchmarks?

Speaker

Abhishek Singh


Explanation

This is critical for ensuring AI systems meet safety and trustworthiness requirements before deployment


How do we connect different AI governance frameworks to avoid fragmentation and improve interoperability?

Speaker

Melinda Claybaugh


Explanation

This addresses the proliferation of different AI governance approaches that could create compliance challenges for global AI deployment


How do we expand the conversation beyond risks to include benefits and involve more stakeholders from civil society and the Global South?

Speaker

Melinda Claybaugh


Explanation

This ensures AI governance discussions are balanced and inclusive of diverse perspectives and use cases


How do we build reliable, repeatable assessments for AI systems implementation and governance frameworks?

Speaker

Ansgar Koene


Explanation

This is essential for providing end-users with confidence and trust in AI systems through standardized evaluation methods


How do we streamline the multiplication of AI governance efforts and forums to avoid duplication?

Speaker

Juha Heikkila


Explanation

The current landscape has numerous overlapping initiatives that may not be sustainable long-term and could lead to inefficiencies


How can principles of Internet governance be applied to AI governance, considering AI includes more than just Internet-based applications?

Speaker

Shinichiro Terada (audience member)


Explanation

This explores whether existing governance models can be adapted for AI, while recognizing the unique challenges AI presents beyond Internet governance


How do we make AI governance more multi-stakeholder and inclusive like Internet governance, while addressing the concentration of AI power in few corporations?

Speaker

Abhishek Singh (in response to audience question)


Explanation

This addresses the need for more democratic and distributed approaches to AI governance to prevent monopolization


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #190 Judging in the Digital Age Cybersecurity Digital Evidence

WS #190 Judging in the Digital Age Cybersecurity Digital Evidence

Session at a glance

Summary

This discussion focused on “Judging in the Digital Age: Cybersecurity and Digital Evidence,” examining how courts worldwide are adapting to function as digital ecosystems where evidence, records, and hearings increasingly exist online. Dr. Naza Nicholas from Tanzania’s Internet Society opened the session by explaining the initiative’s goal to bridge the gap between judiciary systems and internet governance spaces, building on previous efforts since 2022 to bring judges into digital rights discussions.


Judge Eliamani Laltaika from Tanzania’s High Court outlined the five key considerations courts use when evaluating digital evidence: relevance, authenticity, system integrity, chain of custody, and statutory compliance. He emphasized that these principles apply regardless of whether evidence originates domestically or internationally, and noted that everyone creates digital evidence through daily activities like taking photos or using messaging apps.


Professor Peter Swire from Georgia Tech highlighted three critical areas where digital evidence differs from traditional evidence: authentication challenges in verifying identity, maintaining chain of custody through digital signatures and hash functions, and addressing AI hallucinations where artificial intelligence systems may generate false citations or information. He recommended implementing two-factor authentication and systematic verification of AI-generated content.


The discussion addressed significant challenges including spyware surveillance, with Dr. Jacqueline Pegato from Data Privacy Brazil citing cases where surveillance tools were used against activists and even Supreme Court justices. Advocate Umar Khan from Pakistan emphasized the need for balanced surveillance that protects both security and privacy rights, while Marin Ashraf from IT4Change discussed specific challenges in prosecuting online gender-based violence cases, particularly regarding evidence authentication and platform cooperation.


Participants identified critical gaps including outdated legislation, insufficient judicial training in cybersecurity, and the need for better international cooperation frameworks. The session concluded with calls for continued capacity building, multi-stakeholder dialogue, and systematic reforms to ensure courts can effectively handle digital evidence while protecting fundamental rights in an increasingly connected world.


Keypoints

## Major Discussion Points:


– **Digital Evidence Authentication and Chain of Custody**: The panel extensively discussed the five key considerations for admitting digital evidence in courts: relevance, authenticity, system integrity, chain of custody, and statutory compliance. Speakers emphasized the challenges of verifying digital evidence, especially when it originates from different jurisdictions or involves AI-generated content.


– **Cross-Border Legal Cooperation and Jurisdictional Challenges**: Multiple speakers addressed the complexities of handling digital evidence that crosses international boundaries, discussing the need for legal harmonization, mutual legal assistance treaties, and standardized procedures for accessing data from foreign jurisdictions while respecting data protection laws.


– **State Surveillance vs. Individual Rights**: The discussion covered the tension between legitimate law enforcement needs for digital surveillance and cybersecurity measures versus protecting individual privacy rights and ensuring fair trials. The Brazilian spyware case and Pakistani digital rights experiences were highlighted as examples of this ongoing challenge.


– **Online Gender-Based Violence and Platform Accountability**: Speakers examined the specific challenges courts face when dealing with online gender-based violence cases, including difficulties in obtaining digital evidence from platforms, ensuring survivor privacy, and addressing algorithmic amplification of harm.


– **Judicial Capacity Building and Training Gaps**: A recurring theme was the urgent need for specialized training for judges and legal professionals in cybersecurity, digital forensics, AI literacy, and data protection to keep pace with rapidly evolving technology and emerging forms of digital crime.


## Overall Purpose:


The session aimed to bridge the gap between the judiciary and the internet governance community by creating a permanent platform for dialogue and exchange. The goal was to bring judges into the Internet Governance Forum space, break down institutional silos, and equip judicial systems with the knowledge and tools needed to handle digital evidence and cybersecurity challenges in the modern era.


## Overall Tone:


The discussion maintained a collaborative and educational tone throughout, characterized by mutual respect among panelists from different jurisdictions and backgrounds. The atmosphere was constructive and forward-looking, with speakers sharing practical experiences and concrete recommendations. There was a sense of urgency about addressing the digital knowledge gap in judicial systems, but the tone remained optimistic about the potential for capacity building and international cooperation. The session concluded on an encouraging note, with participants expressing commitment to continued collaboration and training initiatives.


Speakers

**Speakers from the provided list:**


– **Naza Nicholas** – Dr. Naza Nicholas Kirama from Tanzania, works with Internet Society Tanzania Chapter, coordinator for the Tanzania Internet Governance Forum


– **Eliamani Isaya Laltaika** – Honorable Dr. Eliamani Isaya Laltaika, sitting judge of High Court of Tanzania


– **Peter Swire** – Professor at Georgia Tech, law professor teaching in the College of Computing, leader of the Cross-Border Data Forum, expert on cross-border data and law enforcement access issues


– **Jacqueline Pegato** – Works with Data Privacy Brazil, a Brazilian NGO focused on digital rights in Brazil and the Global South


– **Umar Khan** – Advocate of the high court in Pakistan, digital rights and defense lawyer, works on cyber cases in Pakistan


– **Marin Ashraf** – Senior research associate at IT4Change (India-based not-for-profit organization), works on online gender-based violence, digital platform accountability, information integrity, and AI governance issues


– **Adriana Castro** – Professor at External University in Colombia


– **Participant** – (Role/expertise not specified in transcript)


**Additional speakers:**


None identified beyond the provided speakers names list.


Full session report

# Judging in the Digital Age: Cybersecurity and Digital Evidence – Discussion Report


## Introduction and Context


The session “Judging in the Digital Age: Cybersecurity and Digital Evidence” aimed to bridge the gap between judicial systems and internet governance communities. Dr. Naza Nicholas from Tanzania’s Internet Society opened by explaining that this initiative began in 2023 in Japan, building on efforts to bring judges into digital rights discussions and create dialogue platforms between the judiciary and multi-stakeholder internet governance spaces.


The panel included Judge Eliamani Isaya Laltaika from Tanzania’s High Court, Professor Peter Swire from Georgia Tech, Dr. Jacqueline Pegato from Data Privacy Brazil, Advocate Umar Khan from Pakistan’s high court, Marin Ashraf from India’s IT4Change, and Professor Adriana Castro from Colombia’s External University. Dr. Nicholas outlined four key questions the session would address: how courts assess digital evidence, balancing surveillance with privacy rights, handling cross-border digital evidence, and protecting court systems from cyber threats.


## Digital Evidence Assessment Framework


Judge Laltaika established the foundational principles courts use when evaluating digital evidence, emphasizing five key considerations: relevance, authenticity, system integrity, chain of custody, and statutory compliance. He noted that “digital evidence assessment follows the same principles regardless of jurisdiction, with no discrimination between domestic and foreign evidence.”


The judge provided historical context, explaining that digital evidence is relatively new in legal development, beginning in the late 1970s in the United States. He made the discussion relevant by observing that “each one of you is currently creating digital evidence or electronic evidence from the pictures you are taking, from your geolocation, from the voices you are sending over WhatsApp.”


## Technical Challenges and AI Concerns


Professor Swire highlighted critical differences between digital and traditional evidence, particularly around authentication and chain of custody. He emphasized that two-factor authentication is significantly more secure than password-based systems and that digital signatures using mathematical hash operations can prove document integrity.


Swire raised concerns about AI-generated content, providing a specific example: “We know that AI can have hallucinations. We’ve seen law cases in the United States where a lawyer just put in a question to the AI system and got back case citations that were not true. They made them up.” He recommended systematic verification of AI-generated content, suggesting judges sample-check citations when full verification isn’t feasible.


## Surveillance and Privacy Rights


Dr. Pegato presented concerning examples of spyware surveillance tools being used against activists and Supreme Court justices in Brazil. She argued that “when surveillance happens outside transparent legal frameworks, courts are sidelined and unable to guarantee fundamental rights they are tasked to protect.” She emphasized that spyware tools “move way beyond traditional investigative methods, such as telephonic interceptions” and require strict judicial oversight.


Advocate Khan offered a different perspective, acknowledging that “surveillance is necessary for cybersecurity but must balance legality, proportionality, and transparency while protecting constitutional rights to privacy and dignity.” This highlighted different approaches to surveillance regulation across jurisdictions.


## Online Gender-Based Violence Challenges


Marin Ashraf addressed specific challenges courts face with online gender-based violence cases. She explained that “digital evidence in online gender-based violence cases often fails to meet burden of proof due to authentication certificate difficulties and lack of platform cooperation.” Her research in India found that “in many cases, the prosecution fails to even submit digital evidence, and the main barrier here comes from the lack of cooperation from the digital platforms.”


Ashraf emphasized that courts must understand how platforms and algorithms can amplify harms against survivors, arguing for “ecosystem-level changes in sensitisation and inclusive policies for handling online violence cases.”


## Cross-Border Evidence and Jurisdictional Issues


The panel discussed complexities of handling digital evidence crossing international boundaries. Judge Laltaika noted that “international collaboration mechanisms are necessary for data exchange and extraterritorial expertise in cybercrime cases.” Professor Castro raised practical concerns, noting that “notification and contact information procedures present ongoing challenges in data protection investigations.”


Dr. Nicholas emphasized that “legal harmonisation across jurisdictions is needed to handle digital evidence uniformly and share best practices from diverse legal systems,” highlighting the need for standardized procedures accommodating the borderless nature of digital evidence.


## Judicial Training and Capacity Building


A recurring theme was the urgent need for specialized training. Judge Laltaika acknowledged that “many courts still operate in physical form without understanding digital evidence, creating risks of wrongful convictions.” He advocated that “capacity building programmes should invite judges to forums and designate specific training programmes.”


Dr. Pegato reinforced this, stating that “judges need continuous specialised training in cybersecurity, digital forensics, and data protection to address knowledge gaps.” Advocate Khan noted that “legal frameworks need updating as outdated laws from 2015 cannot adequately address 2025 digital crimes.”


## Cybersecurity for Court Systems


The discussion revealed different approaches to protecting judicial systems from cyber threats. Professor Swire advocated for courts having independent backup systems, arguing that “courts need backup systems and offline storage to protect against ransomware attacks that could lock up judicial files.”


Judge Laltaika presented an alternative view, suggesting that “courts can leverage government data centres for security standards rather than operating in isolation.” He explained that “the judiciary does not operate in silo… we are part of the government. So the standard of security that applies to records of parliament or state house applies to the court as well.”


## Data Protection in Legal Proceedings


The panel addressed balancing transparency in legal proceedings with privacy protection. Dr. Pegato noted that “Brazil has constitutional right to data protection and comprehensive LGPD law but lacks criminal data protection framework,” highlighting legislative gaps even in jurisdictions with advanced privacy laws.


Professor Swire mentioned practical solutions such as protective orders for handling sensitive information, while Professor Castro reinforced the complexity of these issues in cross-border contexts where different privacy regimes must be reconciled.


## Key Implementation Challenges


Several critical challenges emerged from the discussion. Platform cooperation remains problematic, with companies often unresponsive to law enforcement requests for digital evidence. Resource constraints limit courts’ ability to develop comprehensive cybersecurity infrastructure, particularly in developing countries.


Authentication certificate difficulties create barriers to justice access, especially when complainants lack computer resources. Outdated legal frameworks struggle to address rapidly evolving digital crimes and technologies, creating persistent gaps between legal capabilities and technological realities.


## Proposed Solutions


The panel identified concrete steps for addressing these challenges. Continuing dialogue between multi-stakeholder communities and judiciary through IGF sessions was seen as essential. Developing technical and legal standards for digital chain of custody, including metadata preservation and authentication layers, emerged as a priority.


Establishing systematic training programmes for judges in cybersecurity and digital forensics through judicial academies was universally supported. Creating multi-stakeholder dialogue platforms among courts, technologists, civil society, and policymakers was identified as crucial for collaborative solutions.


Updating legal frameworks to address contemporary digital crimes and implementing cybersecurity protocols for courts were emphasized as urgent needs.


## Conclusion


The session successfully brought together diverse stakeholders to address common challenges courts face in adapting to digital evidence and cybersecurity threats. While speakers represented different legal traditions and perspectives, they shared recognition of the urgent need for judicial capacity building, proper digital evidence procedures, and balanced approaches to surveillance and privacy rights.


The discussion established a foundation for ongoing collaboration between traditionally separate communities, demonstrating that technological challenges create opportunities for judicial reform across diverse legal systems. The commitment to continued dialogue and capacity building provides a pathway for ensuring justice systems can effectively serve populations in an increasingly digital world while protecting fundamental rights.


Session transcript

Naza Nicholas: Thank you so much, and you’re welcome to this session, Judging in the Digital Age, Cybersecurity and Digital Evidence. And we are on Channel 5. If you would take your equipment and turn it on, put it on Channel 5. Today we have a number of speakers from various jurisdictions in terms of our IGF, you know, segments. And can I have this slide, please? Thank you. My name is Dr. Naza Nicholas Kirama from Tanzania. I work with the Internet Society Tanzania Chapter, and I also double up as the coordinator for the Tanzania Internet Governance Forum. And today we are going to have a very good session on Judging in the Digital Age, Cybersecurity and Digital Evidence. And why are we here? We are here because courts globally are now digital ecosystems, and evidence, records and even hearings actually exist online. Digital evidence is central to more and more cases: mobile data, emails, metadata, surveillance footage, blockchain logs, AI-generated content. We have things like… We have been working tirelessly since 2022 to bring the judiciary, especially judges, to the Internet Governance space, and it started in 2023 in Japan, where we had a session called Judges on Digital Rights Online. And the goal is to break the silos, to link the judiciary with the Internet Governance space, and to create a platform for dialogue and exchange. We are working with experts, technologists, and policymakers, and not forgetting the regular Internet users. This session builds on that momentum by creating a permanent platform for dialogue and exchange. We are not just talking tech, we are reshaping the judicial culture for the future. In January, we launched a new report on cybersecurity in the judiciary, and we are trying to structure this work around a number of questions. How do we protect institutions? That is question number three. AI and justice: can we trust machines, machine learning, in evidence analysis or sentencing? Question number four: training gaps. Judges need continuous specialized training to keep up with emerging tech and things like AI. The next slide is about our vision. We need resilient, digitally literate people in the judicial system. We need to build legal harmonization across jurisdictions to handle digital evidence uniformly, and share best practices from diverse legal systems: civil, common, and hybrid traditions. We also need to develop capacity building programs in cyber law, data protection, digital forensics, and AI literacy, foster collaboration between the judiciary, civil society, and tech developers, and empower courts not just to catch up, but to lead in shaping responsible digital justice. With that introduction, I now hand over to the Honorable Dr. Eliamani Laltaika from the High Court of Tanzania. Honorable Eliamani, as a sitting judge of the High Court in Tanzania, are your courts currently addressing challenges related to the admissibility of digital evidence, especially when such evidence originates from outside your jurisdiction or lacks clear standards for authentication?


Eliamani Isaya Laltaika: Thank you very much, Dr. Naza. First and foremost, my appreciation to the IGF Secretariat for their willingness and continuous support to engage the judiciary in this very important part of the 21st century legal process. Before I answer your question, Dr. Naza, I would like to unpack some of these concepts. From a legal point of view, cyber security refers to the legal, policy, economic, social and even diplomatic processes that keep the cyberspace safe for all users, including children and people with disabilities, across all regions. So the whole concept of cyber security is to ensure that the cyberspace is safe for all of us to use. And digital evidence, also known as electronic evidence, is information with probative value presented to a court for a judge to consider in making a decision whether something has happened or has not. Digital evidence or electronic evidence is a newcomer in the development of law and judiciaries all over the world. It started in the late 70s in the U.S. Before that, only hard copies were used to prove something that has happened or not. When courts consider whether to admit evidence or not, there are usually five considerations, and these do not distinguish whether that piece of evidence is from one’s own jurisdiction or from some other country. Number one, relevance. I would ask counsel who is addressing me, or I would use my own conviction, to judge whether a certain piece of evidence is relevant to the case I’m addressing. If it is not relevant, it will not be admitted, however impressive it is. Number two, authenticity. The evidence must be shown to be what it purports to be. If you are telling me this is a video of someone stabbing an innocent passerby with a knife, I should be able to know that that is actually what is being shown, not a cartoon that has been curated. Number three, integrity of the system. I should be able to verify the system from which that video was extracted or that piece of paper or that email was printed out. Number four, chain of custody. I should know who took care of that piece of evidence: how many hands did it change through before it came to my court? And finally, and this is a little bit technical, I would check whether it complies with the statutory requirements, and here I would focus on the evidence law of my own country. Every country has its own legal system, its own precedent, its own way of judging evidence. So if a piece of evidence passes that process, there is no discrimination whether it is from my jurisdiction or not. And I would only say that people thought these were only things from the movies, but I can say that these are actual things that are happening. Each one of you is currently creating digital evidence or electronic evidence: from the pictures you are taking, from your geolocation, from the voices you are sending over WhatsApp. Everything, even the meta tags, can be used to authenticate that so-and-so was in Norway on this date and this is what he did. Everything you are doing, from shopping online to walking into a casino, is actually building some sort of a digital evidence ecosystem. What does this mean in practice? It means cyber security law goes much, much, much beyond what many people consider criminal law. As I said two years ago in Japan, law is not only about punishing people. There are so many roles of the law, and I will conclude by this: law can play a punitive role, where so-and-so has done something wrong and must be punished.
Law can play a facilitative role.


Naza Nicholas: Can you hear us?


Peter Swire: Can you hear me right now?


Naza Nicholas: Yes, we can hear you loud and clear. If you can spend one minute to introduce yourself.


Peter Swire: and the background. Yes. Okay. And I don’t know if the video is working. Maybe it doesn’t work. Oh, there it goes. Okay. Hello. I’m in Spain today. My name is Peter Swire. I’m a professor at Georgia Tech. My background is a law professor, but I teach in the College of Computing, so I work on these issues. I also work on these issues as the leader of the Cross-Border Data Forum, where we do a lot of research on issues of data going across borders, especially law enforcement access. So that’s a little bit of my background. Is there a specific question you’d like me to address?


Naza Nicholas: Yes. The question I have for you, professor, is what are the key principles or methodologies that judges and lawyers must understand to critically assess the reliability and chain of custody for digital evidence, especially when it is presented through automated or AI-generated tools? You have five minutes.


Peter Swire: Yes, and I’ll try to stay within the time. So first of all, thank you for including me here today. I’m teaching in Spain this summer, and I feel honored to get to participate in this panel. I will turn to your question with just a little bit of background first, because we have resources at the Cross-Border Data Forum that talks about issues of government access to data across borders, such as the Budapest Convention and how to compare it to the new UN Cybercrime Convention. So we have a very recent study on the Cross-Border Data Forum about this. We’ve also written about how regional conventions, such as in Africa, might be useful for having governments get access to law enforcement requests that exist in other countries, because without that access, the United States has a blocking statute, and it’s hard to get the content of email communications for the judges. So that is background. I would like to emphasize three areas where the digital evidence issues are different. But first, I’ll tell you how much the digital issues are the same. So listening to our distinguished judge just now, his principles for evidence, including relevance and dependability, are the principles of evidence that existed before the Internet happened in a very large extent. And so each country has its own. You’ve always faced the problem that maybe this piece of paper has a fake signature on it. Now it might be a fake document electronically, but it’s been the same problem for judges since forever about whether to believe the evidence that comes into court. So for the three things I’d emphasize, the first is authentication. Somebody might say that they are writing from a police agency or a prosecutor’s office, but in fact, they’re faking it. They might be from some other place. And so and this was mentioned by the judge. And so the first thing to trust in evidence is that you’re dealing with the right party, that is the right person sending you the data. And in a world now where passwords can be broken many times, the standard good technology is to have what’s called two factor authentication. And many of you have used this where you log in with a password, then they send you a code and you send the code. And that’s much harder to fake than than simply a password based system. So that’s the first thing, some some confidence you’re dealing with the right people for authentication. The second question is chain of custody and whether you believe the document that came from Alice is the same document that’s received by Bob. And we have well-established procedures, what are called digital signatures. And the basic idea is Alice sends a document and they do a mathematical operation on it called a hash. And this unique number that emerges on the far end. And if even one sentence or one word in the document is changed, the hash of the document changes. And so these digital signatures are a mechanism to prove what left from Alice is the same thing received by. Bobb, such as the court system. And so systems of digital signatures are very important. The third question that’s come up more recently is what about AI? And we know that AI can have hallucinations. We’ve seen law cases in the United States where a lawyer just put in a question to the AI system and got back case citations that were not true. They made them up. Because AI and large language models use predictive technology, not definite technology, when they are trying to send evidence. 
And so when you receive a set of documents that have been generated by AI, or might have been generated by AI, should you believe all the citations? There’s no perfect answer to this, but one answer is to double-check the citations. Maybe, if you have time, you double-check all of the citations: you go to the link on the page and make sure it says what they say it says. And that’s something we did already when I worked as a clerk for a judge: we checked to make sure the lawyers were giving us proper citations. But if there are too many citations to check that way, maybe you do a sample. Maybe you try 10, or 50, or whatever the number is, and start to see whether any fake citations come in. So what I’m emphasizing today is that, in many ways, the problems are the same ones judges have had since forever. But we have to be sure about authentication: is this really the person? We have to have some assurances on chain of custody and authenticity, and that’s digital signatures. And we have to worry about hallucinations in AI, and that means checking the sources, because otherwise it might be a fake citation that you shouldn’t trust. So I’ll stop there. Thank you very much.
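
The hash mechanism described above can be illustrated with a minimal Python sketch. The file name `exhibit_a.pdf` is hypothetical, and the sketch covers only the integrity-check step of a digital signature; a full digital signature would additionally sign the digest with the sender’s private key so the recipient can also verify who produced it.

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# The sender records the digest when the document leaves;
# the receiving court recomputes it and compares the two values.
sent_digest = sha256_of_file("exhibit_a.pdf")      # computed at origin
received_digest = sha256_of_file("exhibit_a.pdf")  # recomputed on receipt
if received_digest == sent_digest:
    print("Digest matches: the received document is byte-for-byte identical.")
else:
    print("Digest mismatch: the document was altered in transit.")
```

Changing even one word of the file changes the digest completely, which is the property Professor Swire refers to.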


Naza Nicholas: Thank you, Professor. Now we have learned that there is a difference between predictive and definitive technology in citations, and all that. And that is why we are bringing the whole court system into the IGF. I want to now go to Dr. Jacqueline, who is here with us. Dr. Jacqueline, if you would spend the next one minute introducing yourself, then I will pose a question to you.


Jacqueline Pegato: Of course. Thank you so much for having me. My name is Jacqueline Pegato. I’m with Data Privacy Brazil, a Brazilian NGO working with digital rights in Brazil and also in the Global South. It’s a pleasure to be here. Although I’m not a lawyer, I’ll try my best to address your question.


Naza Nicholas: Thank you so much, because we, the IGF, are a multi-stakeholder driven body of the United Nations. Now, I have received an order from the judge for every speaker to stay within five minutes. Thank you, Professor, for staying within five minutes; it was actually around 3.43 minutes. Now, Dr. Jacqueline, with the rise of things like spyware and state surveillance tools being used in the name of national security, how should the judiciary respond when such technologies are used together with evidence? What role do courts play in safeguarding rights while navigating cases where surveillance methods themselves may be legally or ethically contested?


Jacqueline Pegato: Thank you. I will first bring in the concept of spyware for those who are not familiar with it, although I think most of us know by now that spyware refers to surveillance technologies used to secretly extract data from personal devices and networks, often without the user’s knowledge and with minimal legal oversight. While presented as legitimate tools in the context of national security or law enforcement, these tools are increasingly being used in ways that erode democratic institutions. A Brazilian case, I think, exemplifies this threat: First Mile, a spyware tool, was deployed by intelligence officials to surveil targets ranging from activists to Supreme Court justices themselves. It is a paradigmatic case that displays the most salient features of this type of surveillance, but it is only one example in a broader context of lack of oversight. This incident reveals a structural problem: when surveillance happens outside transparent legal frameworks, courts are sidelined and unable to guarantee the fundamental rights they are tasked to protect. And in the Brazilian context of the use of spyware by the state, there is a pending Supreme Court case in which the regulatory gap that allows for the current state of things is being challenged as unconstitutional. In this case, we argue that the use of spyware for surveillance by the state should be ruled unconstitutional, since even in possibly legitimate contexts of criminal prosecution and law enforcement, the nature of how these tools work, and their affordances, moves far beyond traditional investigative methods such as telephonic interception, taking advantage of vulnerabilities found in platforms and networks and resulting in a level of intrusion that is difficult to justify under democratic parameters. However, even if the entire system is not ruled unconstitutional, we are requesting that strict criteria be established for the use of spyware, analogous to the existing regulations for other cases of breach of confidentiality: prior judicial authorization and adherence to the same strictness as in other situations of confidentiality breach; a constitutional interpretation of communication confidentiality updated to contemporary standards of intrusiveness; the inclusion of mechanisms to respect the chain of custody; the individualization of subjects subjected to intrusion procedures; and the development of other parameters compatible with the constitutional order. So I’ll stop here now. Thank you.


Naza Nicholas: Thank you, Dr. Jacqueline. That was very informative. And I’m very glad that you could do this submission. Now I go to Advocate Umar Khan from Pakistan. If you can introduce yourself first.


Umar Khan: Thank you so much, Dr. Naza. Thank you so much, IGF, and the new track in this IGF, somehow following on from the last two IGFs. This is Umar Khan from Pakistan. I’m basically a high court lawyer working on digital rights and dealing with cyber cases in Pakistan, which is a fairly new field in Pakistan. We have had our first national law only since 2016, the Prevention of Electronic Crimes Act. So this is from my side.


Naza Nicholas: Thank you so much, Umar Khan, advocate of the high court in Pakistan. You have vast experience as a digital rights and defense lawyer. How do you see the balance between state surveillance for cybersecurity and the individual’s right to a fair trial, particularly when digital forensics are used to prosecute cybercrime?


Umar Khan: Very important question. I think there are two questions within one question: digital forensics and digital surveillance. To prosecute a digital crime or a cybercrime, both are very important. Digital surveillance means the government or the state is looking after the general public in the digital world as well. Now that the world has become a global village, where everything is one click away, certain issues affecting people, the general masses, arise at the same time. So I believe that surveillance is key; without it, you cannot police the internet. Everybody would be doing their own thing, and anybody could commit a crime. Surveillance is there to monitor, track and collect data. But the main thing is the balance: how the state keeps the balance, looking into the principles of legality, proportionality and, with that, transparency. If the state agencies are conducting surveillance, is it protecting the rights of the people, and at the same time is it not violating the right to privacy and the right to dignity, which are constitutional rights given to citizens by the law, by the constitution, and by the Universal Declaration of Human Rights? So I believe that surveillance, data collection, and all these things are important, but what matters is how the data collected from the end user is protected, because we have seen around the world that user data has been shared at the state level. So I believe that this is very important. The second important thing is digital evidence and forensics. At the end of the day, if a crime has been committed, it is the evidence that has to prove whether the crime was committed or not. As the professor and the honorable judge have mentioned, the chain of custody of the evidence that reaches the court is very important, and without forensics it is not possible to prove that the evidence was collected according to the law: is it following the SOPs, the forensic standards, the legality of the evidence? So I believe that whenever you are prosecuting a crime and collecting evidence, it has to follow the standards of digital forensics so that it can be proved, because forgery is very easy now, and AI has become a tool that can create hurdles for people. So it is the state that should ensure the standards of the digital process, so that a crime that has been committed is prosecuted in a way that does not violate the right to a fair trial, which is very important. This is from my side.


Naza Nicholas: Thank you so much. I appreciate your intervention. Now I go to Marin Ashraf, who is a tech policy researcher. I would like to ask you to use the next minute to introduce yourself, and that will be followed by a question.


Marin Ashraf: Yeah, thank you, Dr. Naza. Hi everyone, my name is Marin and I am a senior research associate at IT4Change, which is an India-based not-for-profit organization working at the intersections of digital technology and social justice. My core areas of work include online gender-based violence and legal and policy responses to it, digital platform accountability, information integrity, and AI governance issues. Very happy to be here and to share the space with the esteemed panelists.


Naza Nicholas: Thank you so much, Marin. I know you are doing a fantastic job of research and informing the community. Your work experience explores the intersection of tech and governance and social justice. From a feminist legal and policy perspective, how can judicial systems be better equipped to handle cases of online gender-based violence and platform-related harms, especially when evidence is embedded within opaque algorithms and transnational digital ecosystems?


Marin Ashraf: Thank you, Dr. Naza. To answer that question, I would like to first briefly share some insights from the research that IT4Change, the organization I work with, undertook on judicial approaches to online gender-based violence cases in India, and on the challenges commonly encountered in prosecuting such cases, especially digital evidentiary issues. Online gender-based violence is typically dealt with as a criminal offense in India under the Indian Penal Code and the Information Technology Act. As with any criminal offense, it becomes crucial to prove the guilt of the accused person beyond reasonable doubt. In our study, we found that several cases of online gender-based violence unfortunately failed to meet the high burden of proof required, because of difficulties in bringing in expert testimony, relying on witnesses, and ensuring the admissibility of digital evidence. Under Indian law, digital evidence is admissible under two conditions: either the original computer resource on which the evidence is recorded is produced, in which case it is primary evidence, or copies of the electronic record are produced, in which case a certificate of authentication is needed. In our research study, we found that in many cases the courts tend to dismiss the digital evidence for lack of an authentication certificate. The issue is that it is sometimes very difficult to obtain the authentication certificate, especially if the complainant doesn’t have access to the computer resource; in many cases they may have applied for the certificate but it was not issued, or they may not know how to get it. So in such cases the courts tend to dismiss the evidence, of course, because of concerns of authenticity. That means losing a crucial piece of evidence, which in many cases might be the only evidence, and thus depriving survivors of access to justice. Another issue is that in many cases the prosecution fails even to submit digital evidence, and the main barrier here is the lack of cooperation from digital platforms, like social media platforms, in responding to requests from law enforcement agencies to provide information. Despite the police asking for information from the social media platforms or from telecom service providers, there is sometimes a delay in responding. A further issue with respect to digital evidence is the threat to privacy: in many cases the evidence, other materials, and the devices have to be submitted to the state or the police, and there have been concerns about manipulation or leaks to the accused side. So there is a significant threat to privacy in that regard, and in preserving the chain of custody. Now, apart from the digital evidentiary issues, I also wanted to touch…


Naza Nicholas: If we were to come up with one red flag judges should look for in a digital forensic report, what would that be?


Peter Swire: Well, I hadn’t prepared for that question.


Naza Nicholas: Yes.


Peter Swire: I thought you might ask for one piece of advice to judicial systems, something they should do. Yes. And I want to mention the problem of ransomware, which is the possibility that a bad actor will try to lock up the files of a court system so that the judges and the courts lose access to the files. When I teach my cybersecurity class, I tell people that for ransomware the most important thing is to have some offline backups of your records, if at all possible. You also have to protect your backup system from attack, because the bad guys try to get into your backup system. We have seen a lot of state and local governments in the United States get hit with these attacks. We have seen court systems get hit with these ransomware attacks. And having a good backup, where you can go back and get everything the way it was yesterday, is a huge help if you’re able to have that kind of technical backup in place. You asked about red flags. I think the thing I would worry about is whether somebody on the other end of the line is really who they say they are. We know in our personal lives that we may think we are talking to somebody on social media, and it’s somebody else. So finding some way to have a channel to communicate with them, and a second channel to make sure they are who they really say they are, is important; that kind of two-factor approach matters, because otherwise you might be getting evidence from somebody who’s not even the right person.
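
Professor Swire’s backup advice pairs naturally with an integrity manifest, so that after an incident a court can check that restored records match what was backed up. Here is a minimal sketch of that idea, assuming a local `records/` directory; the path and file layout are illustrative, not a prescribed court setup.

```python
import hashlib
import json
from pathlib import Path

def build_manifest(root: str) -> dict:
    """Record a SHA-256 digest for every file under the given directory."""
    manifest = {}
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            manifest[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
    return manifest

# Written at backup time and stored offline next to the copies;
# after a restore, rebuild the manifest and compare it to the stored one
# to see exactly which records, if any, were altered or lost.
Path("manifest.json").write_text(json.dumps(build_manifest("records"), indent=2))
```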


Naza Nicholas: Thank you, Professor. Now I go to Dr. Jacqueline. I know you are not a lawyer, but in some way or another we all end up in court. So what would be your suggestions or recommendations on spyware? If you could spend about one minute responding to that.


Jacqueline Pegato: Sure. Thank you for the question. In our project at Data Privacy Brazil, we are developing some key recommendations based on this research. And just to clarify: I spoke about the Brazilian case, but this is not an exclusively Brazilian issue. We have cases in Colombia, and we have a very important precedent in the US with Pegasus and the ruling ordering damages to be paid to Meta. But yes, let me share the recommendations we are working on; I think we have four key recommendations. The first is to develop technical and legal standards for the digital chain of custody, including metadata preservation, access logs, authentication layers and independent audit trails. The second is to train judges and legal professionals in cybersecurity, digital forensics and especially data protection; the digital knowledge gap within our courts is a risk we can no longer afford in this scenario. The third recommendation is to equip courts with cybersecurity protocols and contingency plans to strengthen institutional resilience against cyber threats, including unauthorized access to judicial data. And last but not least, of course, to promote multistakeholder dialogue among courts, technologists, civil society and policymakers. I think judicial systems must evolve collaboratively to meet the realities we are talking about here. So thank you.
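
The first recommendation, covering metadata preservation, access logs and audit trails, can be read as a tamper-evident custody log in which each entry commits to the hash of the previous one. Below is a minimal sketch of that idea; the field names and events are illustrative, not a standard.

```python
import hashlib
import json
import time

def append_entry(log: list, actor: str, action: str) -> None:
    """Append a custody event that commits to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"time": time.time(), "actor": actor, "action": action, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

def verify(log: list) -> bool:
    """Recompute every hash; any edited or deleted entry breaks the chain."""
    prev = "0" * 64
    for entry in log:
        body = {k: entry[k] for k in ("time", "actor", "action", "prev")}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != recomputed:
            return False
        prev = entry["hash"]
    return True

custody_log: list = []
append_entry(custody_log, "forensic examiner", "imaged device")
append_entry(custody_log, "evidence clerk", "transferred image to court")
assert verify(custody_log)  # altering any stored entry makes this fail
```

Because each entry includes the previous hash, silently editing or removing a record invalidates every later entry, which is what makes the trail independently auditable.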


Naza Nicholas: Thank you so much. Marin, can you share one way courts could be more responsive to survivors of online violence? If you could put that in one minute, I think it would be very short and clear.


Marin Ashraf: Yeah, sure. I’ll try. I think one important way, as I said in my previous intervention, is to understand the online public sphere itself and the unique vulnerabilities that people face in the online sphere, especially the role of the platforms and the algorithms in amplifying the harms. Secondly, it’s really important for the courts to uphold the right to privacy of the survivors in cases of online gender-based violence.


Eliamani Isaya Laltaika: using a computer system while in a plane or a ship registered in Tanzania, the law will catch up with you. So to be able to exchange data and get extraterritorial expertise, one must be able to collaborate. And I’m seeing a positive development within the East African Community, where there are initiatives to empower judges and the legal fraternity, to borrow a phrase from my panelist here, on how to really get into the 21st century well-equipped to protect the citizenry. Thank you.


Naza Nicholas: Thank you, Judge. Umar, I know that from the legal perspective you serve civil society. What do you think are the most urgent legal safeguards to protect defendants in digital crime cases?


Umar Khan: Very important question. There is a principle the honorable judge will know: innocent until proven guilty. So a person is innocent until he is proven guilty. There are certain challenges often faced by defendants in digital crime cases, and I will mention just a few of them. One of them is updating outdated legislation, because every day something new is happening in the digital world. A law passed in 2015 cannot simply be applied in 2025, because digital crimes are…


Participant: case in Ecuador and would like to know your perspective. I don’t know if it will be contempt of court, but Ola Bini, a digital rights defender, has been facing a politicized judicial case in Ecuador since 2019. His case illustrates the risks of misusing digital evidence in judicial proceedings. In his trial, the prosecutor’s office used a simple photograph, which showed a connection from an unverified user to an IP address, to support an alleged attempt to gain unauthorized access to a state telecommunication system. Marta says, commonsensically, that a single photograph is not in itself evidence of a digital crime. So Marta wants to know: in addition to the need for digital forensics, what other protocols must be in place to ensure that alleged digital evidence…


Peter Swire: Okay. There are many good possible first steps, but one is to have backups so that you don’t lose the court records; that’s what I said about ransomware. Even in a resource-constrained place, digital storage is now relatively inexpensive. If you lose all the records because of a cyber attack, you will have a very hard time doing your judging. But if at least you have the records saved, then you can start again tomorrow and have a good chance of holding a fair trial.


Eliamani Isaya Laltaika: Thank you very much. To share a practical experience from my country: we have our slot in the government data center. The judiciary does not operate in a silo, with its own way of preserving records; no, we are part of the government. So the standard of security that applies to records of parliament or the state house applies to the court as well. We do not foresee anyone easily targeting the judiciary and succeeding, because you would be targeting the heart of the government.


Naza Nicholas: Thank you, Judge. I saw a hand, and there’s another hand over here; if you can get on the mic. Do we take all the questions and then respond at once? Yes, as they come.


Adriana Castro: Yes. Hi, my name is Adriana Castro. I’m a professor at Externado University in Colombia. And I would like to raise an additional issue, one that comes a moment before the digital evidence: contact information and notification. The Ibero-American data protection network, composed of data protection authorities, recently published an open letter to companies on accountability in data processing. It’s an open invitation directed to the companies which massively process…


Eliamani Isaya Laltaika: The goal of cybersecurity is to ensure that cyberspace is safe for all users, including people with disabilities and even children. With progressive data protection laws, there are ways in which judges are instructed to ensure in-camera hearings.


Peter Swire: So in the United States, we have a law about medical privacy called HIPAA, which I worked on when we created it. And it has a mechanism for what are called qualified protective orders. One possibility is that only the judge looks at the sensitive evidence, in camera. Another possibility is that they close the courtroom just for the medical information, so that both parties see it. Both of those are allowed: the judge creates a protective order around this very sensitive information. So there’s a model for that, which you can find easily online if you search for qualified protective orders.


Naza Nicholas: Thank you so much. Is there any question from online? Okay. Thank you so much. Now, is there anybody else? There was a lady who asked a question, I think.


Eliamani Isaya Laltaika: That was not responded to. I just wanted to say a sentence or two to the lady, the professor from Colombia. There are ongoing UN mechanisms to ensure that country laws are not too restrictive. So there are diplomatic processes to collaborate and ensure that data flows easily for purposes of conviction and adducing evidence. At the moment we are…


Naza Nicholas: Eliamani, today we have participated in this session on judging in the digital era, cybersecurity and digital evidence. Looking at the way you have interacted with the audience and the questions they have brought to the panelists, what would be your parting shot today?


Jacqueline Pegato: I’m going back to Brazil, because it’s the context that I know. In Brazil we have the right to data protection as a constitutional right. This happened in 2022, and I think it was a great victory. We also have a comprehensive general data protection law, which we call the LGPD, in place, along with an independent data protection authority. But we still don’t have a criminal data protection framework, and I think that’s an important gap to address. All of the questions and discussions we raised here today share this concern: we need such a framework to fight privacy-related crimes and to address the weaknesses in the judicial system’s ability to handle cybercrime effectively. This gap in Brazil has been identified and debated in some legislative reform efforts, including the reform of the Brazilian Code of Criminal Procedure, and in specific advocacy efforts, but no law has been approved so far. So I think that’s one development we still have to fight for.


Naza Nicholas: Thank you, Dr. Jacqueline. Marin, the same question goes to you also. Just in one minute, if you can give your parting shot, what would you like to see in the future?


Marin Ashraf: Speaking from the perspective of my area of work, which is online gender-based violence, I think courts should be safe spaces for women and other survivors of online violence who are seeking justice. It’s very important that criminal justice systems undertake ecosystem-level changes in sensitization and inclusive policies, and equip judges and law enforcement to deal with online violence cases in a sensitive and rights-respecting manner. At the same time, it’s also important to update our laws to reflect current realities and to recognize newer forms of violence, like gender trolling, gender-based hate speech, and doxxing, which not all jurisdictions have yet recognized. So changes at the legislative level are also crucial for the judiciary to work effectively in these cases. Thank you.


Naza Nicholas: Professor Peter


Peter Swire: Yes, thank you very much, and thank you for enabling me to participate remotely in this very well organized session today. I want to come back to the data protection point that’s come up a couple of times. We created the Cross-Border Data Forum seven years ago, and it has stated goals on its website. One of them is that government should get access to data when there’s a legitimate government need: there’s an actual crime, maybe there’s a warrant with a judicial court order. On the other hand, we also have to have privacy and data protection when these requests come in. Imagine you’re a company, or imagine you’re a court, and there’s a request from some other country and you don’t know the practices there. Maybe it’s a request for data that’s really meant to stop political dissent; maybe it’s illegitimate and does not respect data protection rules. So our work for seven years has been to figure out how to have correct access when it’s criminal and the right showing is made; how to have privacy and data protection when those rights need to be upheld as part of the system; and how to make it workable, so that the people who hold the data know what their responsibility is.


Naza Nicholas: What will be your parting shot today?


Eliamani Isaya Laltaika: My parting shot is actually just to say thank you to the many people, especially the civil society fraternity, who are doing a fantastic job to ensure that there is capacity building for judges. My colleagues here from India, from Brazil, and even from Pakistan have highlighted that there is this knowledge gap. It’s true that some of us are not aware of some of these developments in cyberspace. You can find a court that is still in its physical form, and if you ask someone there about digital evidence, that is the hardest question you can ask. But we can generally narrow the gap by inviting judges to some of these fora. We can designate programs for capacity building, and many of us will attend. We will not stay in courts only to end up convicting someone wrongfully because we don’t know the law, or just a bit of the science. So thank you very much, Dr. Naza. You have been a trailblazer in Tanzania. That’s how you got me out of my chambers to travel with you to many places to try and learn, and I can assure you that what you are doing is making a difference, not only in Tanzania but across the continent and in other parts of the world. Thank you.


Umar Khan: Thank you so much, Dr. Naza. I believe this is the second time I’m sitting with people from the same or a related background in the legal and judiciary track, and I believe it should be continued. At the end of the day, crimes are happening, and it is the courts, the prosecutors, the lawyers, and the agencies who are going to handle these cases. As mentioned by the Honorable Judge of the High Court of Tanzania, the capacity building of judges is very important; they should not just sit in the courts. And I’m happy that after the last time, when we sat together in Saudi Arabia, we went back home and tried to arrange a training for the judicial academy of my province. Unfortunately, that didn’t happen because of the timing of Ramadan. But I’m hopeful that this time when I go back home we will have some judges trained, inshallah, and we will be looking for your support. Thank you once again.


Naza Nicholas: Thank you so much. Is the professor from Colombia still around? If you have anything to say, you have one minute for your parting shot.


Adriana Castro: I would just say that there are still a lot of challenges in Colombia. We have the main challenge of notification: I mentioned something about data protection investigations, but also in human rights procedures we have a specific issue with notification. So that’s one of the main capacity building areas that we will look to going forward. Thank you very much.


Naza Nicholas: Professor, I think I’ll keep in touch. Thank you, ladies and gentlemen, for attending our session. We have come to the end of it. Thank you so much for your contributions, and thank you so much for listening. One of the goals of our session is to bridge the divide between the multi-stakeholder community and the judiciary: to inform and transform the judiciary into a better institution, and to get the judiciary out of its silos so we can make justice better for every single one of us. Thank you so much. So if we can come together for a picture, for a photo. Thank you so much.


E

Eliamani Isaya Laltaika

Speech speed

117 words per minute

Speech length

1116 words

Speech time

569 seconds

Courts must verify five key considerations for digital evidence: relevance, authenticity, integrity of system, chain of custody, and statutory compliance

Explanation

Judge Laltaika outlined the standard framework courts use to evaluate digital evidence, emphasizing that these considerations apply regardless of whether evidence originates from domestic or foreign jurisdictions. He stressed that evidence must pass through this systematic evaluation process to be admissible in court.


Evidence

Examples provided include verifying a video showing someone stabbing with a knife is authentic and not a curated cartoon, and ensuring proper chain of custody tracking how many hands the evidence passed through before reaching court


Major discussion point

Digital Evidence Authentication and Chain of Custody


Topics

Legal and regulatory | Cybersecurity


Agreed with

– Peter Swire
– Jacqueline Pegato
– Participant

Agreed on

Importance of proper digital evidence authentication and chain of custody


Digital evidence assessment follows the same principles regardless of jurisdiction, with no discrimination between domestic and foreign evidence

Explanation

The judge argued that courts apply the same evidentiary standards whether digital evidence comes from within their jurisdiction or from another country. The key is that evidence must pass the established legal tests for admissibility rather than its geographic origin.


Evidence

Judge explained that every country has its own legal system and precedents, but if evidence passes the required process, there is no discrimination based on jurisdiction


Major discussion point

Cross-Border Digital Evidence and Legal Harmonization


Topics

Legal and regulatory | Jurisdiction


International collaboration mechanisms are necessary for data exchange and extraterritorial expertise in cybercrime cases

Explanation

Judge Laltaika emphasized that cybercrime often crosses borders, requiring courts to collaborate internationally to obtain evidence and expertise. He noted positive developments in regional cooperation, particularly within the East African community.


Evidence

Mentioned initiatives within the East African community to empower judges and legal fraternity with 21st century tools for protecting citizenry


Major discussion point

Cross-Border Digital Evidence and Legal Harmonization


Topics

Legal and regulatory | Cybersecurity


Many courts still operate in physical form without understanding digital evidence, creating risks of wrongful convictions

Explanation

The judge acknowledged a significant knowledge gap in the judiciary regarding digital evidence and cyberspace developments. He warned that judges lacking this knowledge could make incorrect decisions that result in wrongful convictions.


Evidence

Judge stated that asking some courts about digital evidence would be the hardest question, and emphasized the risk of convicting someone wrongfully due to lack of knowledge


Major discussion point

Judicial Capacity Building and Training Gaps


Topics

Legal and regulatory | Capacity development


Agreed with

– Jacqueline Pegato
– Umar Khan

Agreed on

Need for judicial capacity building and training in digital technologies


Capacity building programs should invite judges to forums and designate specific training programs

Explanation

Judge Laltaika advocated for proactive judicial education through specialized forums and training programs. He emphasized that judges should not remain isolated in their chambers but should actively seek to learn about technological developments affecting their work.


Evidence

Judge thanked civil society for capacity building efforts and mentioned his own participation in various learning opportunities organized by Dr. Naza Nicholas


Major discussion point

Judicial Capacity Building and Training Gaps


Topics

Capacity development | Legal and regulatory


Agreed with

– Jacqueline Pegato
– Umar Khan

Agreed on

Need for judicial capacity building and training in digital technologies


Courts can leverage government data centers for security standards rather than operating in isolation

Explanation

The judge explained that in Tanzania, the judiciary doesn’t operate independently for data security but is integrated with government infrastructure. This approach provides better security standards by applying the same protections used for other government institutions.


Evidence

Judge mentioned that Tanzania’s judiciary has a slot in the government data center, with security standards that apply to parliament and state house also applying to courts


Major discussion point

Cybersecurity Infrastructure for Courts


Topics

Infrastructure | Cybersecurity


Disagreed with

– Peter Swire

Disagreed on

Approach to cybersecurity infrastructure for courts


P

Peter Swire

Speech speed

193 words per minute

Speech length

1664 words

Speech time

514 seconds

Authentication requires confidence in dealing with the right parties, with two-factor authentication being more secure than password-based systems

Explanation

Professor Swire emphasized that verifying the identity of parties providing evidence is crucial, as passwords can be easily compromised. He recommended two-factor authentication as a more reliable method for ensuring authentic communication in legal proceedings.


Evidence

Explained two-factor authentication process where users log in with password then receive and send back a code, making it much harder to fake than simple password systems


Major discussion point

Digital Evidence Authentication and Chain of Custody


Topics

Cybersecurity | Legal and regulatory


Agreed with

– Eliamani Isaya Laltaika
– Jacqueline Pegato
– Participant

Agreed on

Importance of proper digital evidence authentication and chain of custody


Digital signatures using mathematical hash operations can prove document integrity from sender to receiver

Explanation

Professor Swire explained how digital signatures work through mathematical hash functions that create unique numbers for documents. If even one word is changed, the hash changes, providing proof that the document received is identical to what was sent.


Evidence

Described the technical process where Alice sends a document with a mathematical hash operation, and any change in the document changes the unique number, proving integrity from Alice to Bob (such as a court system)


Major discussion point

Digital Evidence Authentication and Chain of Custody


Topics

Cybersecurity | Legal and regulatory


Agreed with

– Eliamani Isaya Laltaika
– Jacqueline Pegato
– Participant

Agreed on

Importance of proper digital evidence authentication and chain of custody


AI systems can have hallucinations and generate fake citations, requiring judges to double-check sources and citations

Explanation

Professor Swire warned about AI’s tendency to create false information, particularly fake legal citations, because AI uses predictive rather than definitive technology. He emphasized the need for judges to verify AI-generated content before relying on it.


Evidence

Cited US law cases where lawyers used AI systems that generated fake case citations, and explained that large language models use predictive technology that can make up citations


Major discussion point

AI and Automated Tools in Legal Proceedings


Topics

Legal and regulatory | Cybersecurity


Courts need to distinguish between predictive AI technology and definitive evidence when assessing reliability

Explanation

Professor Swire highlighted the fundamental difference between AI’s predictive capabilities and the definitive evidence required in legal proceedings. This distinction is crucial for judges when evaluating AI-generated or AI-assisted evidence.


Evidence

Explained that large language models use predictive technology, not definitive technology, when generating evidence


Major discussion point

AI and Automated Tools in Legal Proceedings


Topics

Legal and regulatory | Cybersecurity


Judges should sample-check citations when there are too many to verify individually

Explanation

Professor Swire provided practical advice for handling large volumes of potentially AI-generated citations. When complete verification isn’t feasible, statistical sampling can help identify patterns of fake citations.


Evidence

Suggested trying 10, 50, or whatever number is manageable to check for fake citations when complete verification isn’t possible


Major discussion point

AI and Automated Tools in Legal Proceedings


Topics

Legal and regulatory | Cybersecurity


Courts need backup systems and offline storage to protect against ransomware attacks that could lock up judicial files

Explanation

Professor Swire identified ransomware as a major threat to court systems, where bad actors can lock up all court files. He emphasized that having offline backups is the most important protection, though backup systems themselves must also be secured.


Evidence

Mentioned seeing state and local governments and court systems in the US hit with ransomware attacks, and emphasized the importance of having backups to restore everything from yesterday


Major discussion point

Cybersecurity Infrastructure for Courts


Topics

Cybersecurity | Infrastructure


Disagreed with

– Eliamani Isaya Laltaika

Disagreed on

Approach to cybersecurity infrastructure for courts


Courts need mechanisms like qualified protective orders for handling sensitive information, including in-camera hearings

Explanation

Professor Swire described legal mechanisms from US medical privacy law (HIPAA) that can be adapted for protecting sensitive digital evidence. These include judges reviewing evidence privately or closing courtrooms for sensitive information while ensuring both parties can see it.


Evidence

Referenced HIPAA qualified protective orders that allow either only the judge to see sensitive evidence in camera, or closing the courtroom just for medical information so both parties can see it


Major discussion point

Data Protection and Privacy in Legal Proceedings


Topics

Human rights | Legal and regulatory


Agreed with

– Jacqueline Pegato
– Umar Khan

Agreed on

Need for balance between surveillance/security and privacy rights


J

Jacqueline Pegato

Speech speed

123 words per minute

Speech length

830 words

Speech time

404 seconds

Spyware tools are increasingly used beyond legitimate law enforcement, eroding democratic institutions and requiring strict judicial oversight

Explanation

Jacqueline argued that spyware, while presented as legitimate national security tools, is being misused to target activists and even Supreme Court justices. This misuse occurs outside transparent legal frameworks, preventing courts from protecting fundamental rights.


Evidence

Cited the First Mile spyware case where intelligence officials surveilled targets ranging from activists to Supreme Court justices, demonstrating surveillance outside transparent legal frameworks


Major discussion point

State Surveillance and Privacy Rights Balance


Topics

Human rights | Cybersecurity


Agreed with

– Umar Khan
– Peter Swire

Agreed on

Need for balance between surveillance/security and privacy rights


Disagreed with

– Umar Khan

Disagreed on

Scope of spyware regulation


Courts should establish strict criteria for spyware use, including prior judicial authorization and constitutional interpretation updated for contemporary intrusion standards

Explanation

Jacqueline outlined specific legal requirements that should govern spyware use, emphasizing that these tools go far beyond traditional investigative methods. She argued for constitutional protections to be updated to reflect modern intrusion capabilities.


Evidence

Referenced a pending Brazilian Supreme Court case challenging spyware use as unconstitutional, and detailed specific requirements including prior judicial authorization, chain of custody mechanisms, and individualization of surveillance subjects


Major discussion point

State Surveillance and Privacy Rights Balance


Topics

Human rights | Legal and regulatory


Agreed with

– Umar Khan
– Peter Swire

Agreed on

Need for balance between surveillance/security and privacy rights


Technical and legal standards must be developed for digital chain of custody, including metadata preservation and independent audit trails

Explanation

Jacqueline emphasized the need for comprehensive standards that ensure digital evidence integrity throughout the legal process. These standards should include technical safeguards and independent verification mechanisms.


Evidence

Listed specific requirements including metadata preservation, access logs, authentication layers and independent audit trails as part of comprehensive digital chain of custody standards


Major discussion point

Digital Evidence Authentication and Chain of Custody


Topics

Legal and regulatory | Cybersecurity


Agreed with

– Eliamani Isaya Laltaika
– Peter Swire
– Participant

Agreed on

Importance of proper digital evidence authentication and chain of custody


Judges need continuous specialized training in cybersecurity, digital forensics, and data protection to address knowledge gaps

Explanation

Jacqueline identified the digital knowledge gap within courts as a significant risk that can no longer be ignored. She emphasized that judicial systems must evolve collaboratively with technical experts to meet current realities.


Evidence

Stated that the digital knowledge gap within courts is a risk that can no longer be afforded, and emphasized need for multistakeholder dialogue among courts, technologists, civil society and policymakers


Major discussion point

Judicial Capacity Building and Training Gaps


Topics

Capacity development | Legal and regulatory


Agreed with

– Eliamani Isaya Laltaika
– Umar Khan

Agreed on

Need for judicial capacity building and training in digital technologies


Judicial systems should have cybersecurity protocols and contingency plans to strengthen institutional resilience

Explanation

Jacqueline argued that courts need proactive cybersecurity measures to protect against cyber threats, including unauthorized access to judicial data. This requires systematic planning and implementation of security protocols.


Major discussion point

Cybersecurity Infrastructure for Courts


Topics

Cybersecurity | Infrastructure


Brazil has constitutional right to data protection and comprehensive LGPD law but lacks criminal data protection framework

Explanation

Jacqueline highlighted a significant gap in Brazil’s legal framework where data protection is constitutionally protected and regulated, but criminal enforcement mechanisms are missing. This gap affects the judicial system’s ability to handle cybercrime effectively.


Evidence

Mentioned that Brazil’s constitutional right to data protection was established in 2022, and that legislative reform efforts including reform of the Brazilian Code of Criminal Procedure have been debated but no law approved


Major discussion point

Data Protection and Privacy in Legal Proceedings


Topics

Human rights | Legal and regulatory


M

Marin Ashraf

Speech speed

152 words per minute

Speech length

758 words

Speech time

297 seconds

Digital evidence in online gender-based violence cases often fails to meet burden of proof due to authentication certificate difficulties and lack of platform cooperation

Explanation

Marin explained that online gender-based violence cases frequently fail in court because of technical barriers to proving digital evidence. The high burden of proof required in criminal cases becomes difficult to meet when authentication certificates are hard to obtain or platforms don’t cooperate with law enforcement.


Evidence

Cited research study showing courts dismiss digital evidence due to lack of authentication certificates, and noted difficulties when complainants don’t have access to computer resources or don’t know how to obtain certificates


Major discussion point

Online Gender-Based Violence and Platform Accountability


Topics

Human rights | Legal and regulatory


Courts must understand online vulnerabilities and the role of platforms and algorithms in amplifying harms against survivors

Explanation

Marin argued that effective judicial response to online gender-based violence requires understanding how digital platforms and their algorithms can amplify harm. Courts need to recognize the unique nature of online spaces and their impact on survivors.


Major discussion point

Online Gender-Based Violence and Platform Accountability


Topics

Human rights | Legal and regulatory


Criminal justice systems need ecosystem-level changes in sensitization and inclusive policies for handling online violence cases

Explanation

Marin emphasized that addressing online gender-based violence requires comprehensive reform of criminal justice systems, not just individual training. This includes making courts safe spaces for survivors and updating laws to recognize new forms of violence.


Evidence

Mentioned need to recognize newer forms of violence like gender trolling, gender-based hate speech, and doxxing that not all jurisdictions have yet recognized


Major discussion point

Online Gender-Based Violence and Platform Accountability


Topics

Human rights | Legal and regulatory


U

Umar Khan

Speech speed

139 words per minute

Speech length

875 words

Speech time

376 seconds

Surveillance is necessary for cybersecurity but must balance legality, proportionality, and transparency while protecting constitutional rights to privacy and dignity

Explanation

Umar Khan argued that state surveillance in the digital world is essential for monitoring and preventing cybercrime, but it must be conducted within legal frameworks that respect fundamental rights. The key challenge is maintaining this balance while ensuring effective law enforcement.


Evidence

Emphasized principles of legality, proportionality, and transparency in surveillance, and mentioned constitutional rights to privacy and dignity as well as Universal Declaration of Human Rights


Major discussion point

State Surveillance and Privacy Rights Balance


Topics

Human rights | Cybersecurity


Agreed with

– Jacqueline Pegato
– Peter Swire

Agreed on

Need for balance between surveillance/security and privacy rights


Disagreed with

– Jacqueline Pegato

Disagreed on

Scope of spyware regulation


Legal frameworks need updating as outdated laws from 2015 cannot adequately address 2025 digital crimes

Explanation

Umar Khan highlighted the rapid pace of technological change that makes existing cybercrime laws obsolete. He emphasized that legal frameworks must continuously evolve to address new forms of digital crime and technological developments.


Evidence

Mentioned Pakistan’s Prevention of Electronics Crime Act from 2016 as an example of how laws become outdated, stating that a law passed in 2015 cannot be brought to 2025 because digital crimes are constantly evolving


Major discussion point

Judicial Capacity Building and Training Gaps


Topics

Legal and regulatory | Cybersecurity


Agreed with

– Eliamani Isaya Laltaika
– Jacqueline Pegato

Agreed on

Need for judicial capacity building and training in digital technologies


P

Participant

Speech speed

123 words per minute

Speech length

128 words

Speech time

62 seconds

A simple photograph showing IP connection is insufficient evidence for digital crimes without proper forensic protocols

Explanation

A participant raised the case of Ola Bini in Ecuador, where prosecutors used only a photograph showing connection from an unverified user to an IP address as evidence of unauthorized access to state systems. The participant argued that such minimal evidence cannot constitute proof of digital crime.


Evidence

Referenced the Ola Bini case in Ecuador where a digital rights defender faced charges based on a single photograph showing IP connection, which the participant argued commonsensically cannot be evidence of digital crime by itself


Major discussion point

Digital Evidence Authentication and Chain of Custody


Topics

Human rights | Legal and regulatory


Agreed with

– Eliamani Isaya Laltaika
– Peter Swire
– Jacqueline Pegato

Agreed on

Importance of proper digital evidence authentication and chain of custody


A

Adriana Castro

Speech speed

118 words per minute

Speech length

127 words

Speech time

64 seconds

Notification and contact information procedures present ongoing challenges in data protection investigations

Explanation

Professor Castro highlighted specific procedural challenges in data protection cases, particularly around notifying parties and establishing proper contact information. These issues affect both data protection investigations and human rights procedures.


Evidence

Mentioned that the Ibero-American data protection network published an open letter to companies about data processing accountability, and noted specific notification challenges in Colombia


Major discussion point

Data Protection and Privacy in Legal Proceedings


Topics

Human rights | Legal and regulatory


N

Naza Nicholas

Speech speed

107 words per minute

Speech length

1389 words

Speech time

775 seconds

Legal harmonization across jurisdictions is needed to handle digital evidence uniformly and share best practices from diverse legal systems

Explanation

Dr. Nicholas emphasized the need for consistent approaches to digital evidence across different legal systems and jurisdictions. He advocated for sharing best practices between civil, common, and hybrid legal traditions to create more uniform standards.


Evidence

Mentioned the goal of building legal harmonization across jurisdictions and sharing best practices from diverse legal systems including civil, common, and hybrid traditions


Major discussion point

Cross-Border Digital Evidence and Legal Harmonization


Topics

Legal and regulatory | Capacity development


Agreements

Agreement points

Need for judicial capacity building and training in digital technologies

Speakers

– Eliamani Isaya Laltaika
– Jacqueline Pegato
– Umar Khan

Arguments

Many courts still operate in physical form without understanding digital evidence, creating risks of wrongful convictions


Capacity building programs should invite judges to forums and designate specific training programs


Judges need continuous specialized training in cybersecurity, digital forensics, and data protection to address knowledge gaps


Legal frameworks need updating as outdated laws from 2015 cannot adequately address 2025 digital crimes


Summary

All speakers agreed that there is a significant knowledge gap in the judiciary regarding digital evidence and emerging technologies, requiring systematic capacity building programs and continuous training to prevent wrongful decisions and keep pace with technological developments.


Topics

Capacity development | Legal and regulatory


Importance of proper digital evidence authentication and chain of custody

Speakers

– Eliamani Isaya Laltaika
– Peter Swire
– Jacqueline Pegato
– Participant

Arguments

Courts must verify five key considerations for digital evidence: relevance, authenticity, integrity of system, chain of custody, and statutory compliance


Authentication requires confidence in dealing with the right parties, with two-factor authentication being more secure than password-based systems


Digital signatures using mathematical hash operations can prove document integrity from sender to receiver


Technical and legal standards must be developed for digital chain of custody, including metadata preservation and independent audit trails


A simple photograph showing IP connection is insufficient evidence for digital crimes without proper forensic protocols


Summary

Speakers unanimously emphasized that digital evidence requires rigorous authentication processes, proper chain of custody documentation, and technical standards to ensure reliability and admissibility in court proceedings.


Topics

Legal and regulatory | Cybersecurity


Need for balance between surveillance/security and privacy rights

Speakers

– Jacqueline Pegato
– Umar Khan
– Peter Swire

Arguments

Spyware tools are increasingly used beyond legitimate law enforcement, eroding democratic institutions and requiring strict judicial oversight


Courts should establish strict criteria for spyware use, including prior judicial authorization and constitutional interpretation updated for contemporary intrusion standards


Surveillance is necessary for cybersecurity but must balance legality, proportionality, and transparency while protecting constitutional rights to privacy and dignity


Courts need mechanisms like qualified protective orders for handling sensitive information, including in-camera hearings


Summary

Speakers agreed that while surveillance and security measures are necessary for cybersecurity, they must be balanced with privacy rights through proper legal frameworks, judicial oversight, and constitutional protections.


Topics

Human rights | Cybersecurity


Similar viewpoints

Both speakers emphasized that digital evidence evaluation should follow consistent principles regardless of origin, and that courts need robust cybersecurity infrastructure including backup systems to protect against cyber threats.

Speakers

– Eliamani Isaya Laltaika
– Peter Swire

Arguments

Digital evidence assessment follows the same principles regardless of jurisdiction, with no discrimination between domestic and foreign evidence


Courts need backup systems and offline storage to protect against ransomware attacks that could lock up judicial files


Courts can leverage government data centers for security standards rather than operating in isolation


Topics

Legal and regulatory | Cybersecurity | Infrastructure


Both speakers highlighted gaps in legal frameworks and the need for comprehensive reforms to address digital rights violations, particularly emphasizing the challenges faced by vulnerable groups in accessing justice through digital evidence.

Speakers

– Jacqueline Pegato
– Marin Ashraf

Arguments

Brazil has constitutional right to data protection and comprehensive LGPD law but lacks criminal data protection framework


Digital evidence in online gender-based violence cases often fails to meet burden of proof due to authentication certificate difficulties and lack of platform cooperation


Criminal justice systems need ecosystem-level changes in sensitization and inclusive policies for handling online violence cases


Topics

Human rights | Legal and regulatory


Both speakers emphasized the risks posed by AI technology in legal proceedings and the need for courts to develop systematic approaches to verify AI-generated content while building institutional resilience against cyber threats.

Speakers

– Peter Swire
– Jacqueline Pegato

Arguments

AI systems can have hallucinations and generate fake citations, requiring judges to double-check sources and citations


Courts should distinguish between predictive AI technology and definitive evidence when assessing reliability


Judges should sample-check citations when there are too many to verify individually


Judicial systems should have cybersecurity protocols and contingency plans to strengthen institutional resilience


Topics

Legal and regulatory | Cybersecurity


Unexpected consensus

Cross-border cooperation and legal harmonization

Speakers

– Eliamani Isaya Laltaika
– Naza Nicholas
– Adriana Castro

Arguments

International collaboration mechanisms are necessary for data exchange and extraterritorial expertise in cybercrime cases


Legal harmonization across jurisdictions is needed to handle digital evidence uniformly and share best practices from diverse legal systems


Notification and contact information procedures present ongoing challenges in data protection investigations


Explanation

Despite representing different legal systems (Tanzania’s common law, Brazil’s civil law, and Colombia’s hybrid system), speakers showed unexpected consensus on the need for international cooperation and harmonized approaches to digital evidence, suggesting that technological challenges transcend traditional legal system boundaries.


Topics

Legal and regulatory | Jurisdiction


Multi-stakeholder approach to judicial reform

Speakers

– Eliamani Isaya Laltaika
– Jacqueline Pegato
– Marin Ashraf

Arguments

Capacity building programs should invite judges to forums and designate specific training programs


Judges need continuous specialized training in cybersecurity, digital forensics, and data protection to address knowledge gaps


Courts must understand online vulnerabilities and the role of platforms and algorithms in amplifying harms against survivors


Explanation

Unexpectedly, a sitting judge (Laltaika) showed strong alignment with civil society advocates (Pegato and Ashraf) on the need for collaborative, multi-stakeholder approaches to judicial education and reform, breaking down traditional institutional silos.


Topics

Capacity development | Human rights | Legal and regulatory


Overall assessment

Summary

The discussion revealed strong consensus across all speakers on three main areas: the critical need for judicial capacity building in digital technologies, the importance of rigorous digital evidence authentication processes, and the necessity of balancing security measures with privacy rights. Speakers also agreed on the need for international cooperation and multi-stakeholder approaches to address digital justice challenges.


Consensus level

High level of consensus with significant implications for digital justice reform. The agreement between judicial officials and civil society advocates suggests a shared understanding of the challenges and potential for collaborative solutions. This consensus indicates readiness for systematic reforms in judicial systems globally, including harmonized standards for digital evidence, comprehensive training programs, and balanced approaches to surveillance and privacy rights. The unexpected alignment between different legal traditions and stakeholder groups suggests that technological challenges are creating common ground for judicial reform across diverse legal systems.


Differences

Different viewpoints

Approach to cybersecurity infrastructure for courts

Speakers

– Peter Swire
– Eliamani Isaya Laltaika

Arguments

Courts need backup systems and offline storage to protect against ransomware attacks that could lock up judicial files


Courts can leverage government data centers for security standards rather than operating in isolation


Summary

Professor Swire advocates for courts to have independent backup systems and offline storage as protection against ransomware, while Judge Laltaika argues that courts should integrate with government data centers rather than operate independently for security


Topics

Cybersecurity | Infrastructure


Scope of spyware regulation

Speakers

– Jacqueline Pegato
– Umar Khan

Arguments

Spyware tools are increasingly used beyond legitimate law enforcement, eroding democratic institutions and requiring strict judicial oversight


Surveillance is necessary for cybersecurity but must balance legality, proportionality, and transparency while protecting constitutional rights to privacy and dignity


Summary

Jacqueline takes a more restrictive stance on spyware, arguing it erodes democratic institutions and should face strict judicial oversight, while Umar Khan sees surveillance as necessary for cybersecurity with appropriate balancing of rights


Topics

Human rights | Cybersecurity


Unexpected differences

Individual vs. institutional approach to judicial cybersecurity

Speakers

– Peter Swire
– Eliamani Isaya Laltaika

Arguments

Courts need backup systems and offline storage to protect against ransomware attacks that could lock up judicial files


Courts can leverage government data centers for security standards rather than operating in isolation


Explanation

This disagreement is unexpected because both speakers are cybersecurity experts, yet they have fundamentally different philosophies about whether courts should have independent security infrastructure or integrate with government systems. This reflects deeper questions about judicial independence versus efficiency


Topics

Cybersecurity | Infrastructure


Overall assessment

Summary

The discussion showed remarkable consensus on the need for judicial modernization, capacity building, and better digital evidence handling, with disagreements mainly on implementation approaches rather than fundamental goals


Disagreement level

Low to moderate disagreement level. Most speakers agreed on core principles but differed on specific methodologies and priorities. The main tensions were between individual versus institutional approaches to security, and between restrictive versus balanced approaches to surveillance. These disagreements reflect practical implementation challenges rather than fundamental philosophical differences, suggesting good potential for collaborative solutions.



Takeaways

Key takeaways

Courts must establish five key considerations for digital evidence admissibility: relevance, authenticity, integrity of system, chain of custody, and statutory compliance, with no discrimination between domestic and foreign evidence


AI-generated evidence poses new challenges as AI systems can hallucinate and create fake citations, requiring judges to double-check sources and sample-verify citations when volume is too large


Digital authentication requires two-factor verification and digital signatures using mathematical hash operations to ensure document integrity from sender to receiver (a minimal code sketch follows this list)


State surveillance tools like spyware require strict judicial oversight with prior authorization, constitutional interpretation updated for contemporary intrusion standards, and balance between security needs and privacy rights


Online gender-based violence cases face unique challenges with digital evidence authentication and platform cooperation, requiring courts to understand online vulnerabilities and algorithmic harm amplification


Cross-border digital evidence cooperation requires legal harmonization across jurisdictions and international collaboration mechanisms for data exchange in cybercrime cases


Judicial capacity building is critical as many judges lack knowledge of digital evidence, cybersecurity, and data protection, creating risks of wrongful convictions


Courts need robust cybersecurity infrastructure including backup systems, offline storage, and contingency plans to protect against ransomware attacks and maintain judicial operations
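
The hash-based integrity takeaway above can be made concrete in a few lines of code. The following is a minimal sketch, assuming only Python's standard library; the pre-shared key, function names, and sample exhibit are invented for illustration, and production systems would normally use asymmetric (public-key) signatures rather than the shared-key HMAC shown here.

```python
import hashlib
import hmac

def fingerprint(document: bytes) -> str:
    """SHA-256 hash of the document: any change to the bytes changes the hash."""
    return hashlib.sha256(document).hexdigest()

def sign(document: bytes, key: bytes) -> str:
    """Keyed digest (HMAC): only holders of the key can produce a valid tag."""
    return hmac.new(key, document, hashlib.sha256).hexdigest()

def verify(document: bytes, key: bytes, tag: str) -> bool:
    """Recompute the tag on the receiving side and compare in constant time."""
    return hmac.compare_digest(sign(document, key), tag)

evidence = b"Exhibit A: extracted chat log, 2024-01-15"  # illustrative sample
key = b"shared-secret-for-demo-only"                      # assumption: demo key only
tag = sign(evidence, key)

print("sha256:", fingerprint(evidence))                   # the document's fingerprint
assert verify(evidence, key, tag)                         # intact document verifies
assert not verify(evidence + b" (edited)", key, tag)      # any alteration is detected
```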


Resolutions and action items

Continue bridging the divide between multi-stakeholder community and judiciary through IGF sessions to transform judicial institutions


Develop technical and legal standards for digital chain of custody including metadata preservation, access logs, authentication layers and independent audit trails (a hash-chain sketch follows this list)


Establish training programs for judges in cybersecurity, digital forensics, and data protection through judicial academies and capacity building initiatives


Create multistakeholder dialogue platforms among courts, technologists, civil society and policymakers for collaborative judicial system evolution


Update legal frameworks to address contemporary digital crimes and recognize newer forms of online violence like gender trolling and doxxing


Implement cybersecurity protocols and contingency plans to strengthen institutional resilience against cyber threats


Advocate for comprehensive criminal data protection frameworks in jurisdictions lacking such legislation
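
The audit-trail action item above (see the forward reference there) can be sketched as a hash chain, where each log entry commits to the previous one so that retroactive edits are detectable. This is a minimal sketch under stated assumptions: the field names are invented, it uses only Python's standard library, and a real system would add signatures, trusted timestamps, and independent replication.

```python
import hashlib
import json
import time

def _digest(body: dict) -> str:
    # Canonical JSON so the same entry always hashes identically.
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append(log: list, actor: str, action: str) -> None:
    """Append an entry that commits to the previous entry's hash."""
    prev = log[-1]["hash"] if log else "genesis"
    body = {"ts": time.time(), "actor": actor, "action": action, "prev": prev}
    log.append({**body, "hash": _digest(body)})

def verify(log: list) -> bool:
    """Recompute every link; any retroactive modification breaks the chain."""
    prev = "genesis"
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev"] != prev or entry["hash"] != _digest(body):
            return False
        prev = entry["hash"]
    return True

custody_log: list = []
append(custody_log, "officer_1", "device seized and imaged")       # illustrative
append(custody_log, "lab_tech_2", "forensic image hash verified")  # illustrative
assert verify(custody_log)

custody_log[0]["actor"] = "someone_else"   # simulate tampering with history
assert not verify(custody_log)
```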


Unresolved issues

How to effectively balance state surveillance needs for cybersecurity with individual privacy rights and fair trial guarantees in practice


Lack of platform cooperation in providing digital evidence for law enforcement requests, causing delays and evidence gaps


Notification and contact information procedures in cross-border data protection investigations remain challenging


Outdated legal frameworks cannot adequately address rapidly evolving digital crimes and technologies


Resource constraints in developing comprehensive backup systems and cybersecurity infrastructure for courts


Gaps in criminal data protection frameworks in various jurisdictions including Brazil


Difficulty obtaining authentication certificates for digital evidence, particularly when complainants lack access to computer resources


Privacy threats and chain of custody concerns when digital devices must be submitted to law enforcement


Suggested compromises

Use qualified protective orders allowing judges to review sensitive evidence in-camera or close courtrooms only for sensitive information while maintaining transparency for other proceedings


Implement sample-checking of citations when full verification is not feasible due to volume constraints


Leverage existing government data center infrastructure for judicial cybersecurity rather than courts operating in isolation


Establish strict criteria for spyware use analogous to existing regulations for other confidentiality breaches while allowing legitimate law enforcement applications


Develop progressive data protection laws with in-camera hearing provisions to balance transparency with privacy protection


Create mechanisms for legitimate government access to data with proper judicial oversight while maintaining privacy and data protection when rights need to be upheld


Thought provoking comments

Digital evidence is a newcomer in the development of law and judiciaries all over the world. It started in the late 70s in the U.S. Before that, only hard copies were used to prove something that has happened or not… each one of you is currently creating digital evidence or electronic evidence from the pictures you are taking, from your geolocation, from the voices you are sending over WhatsApp.

Speaker

Eliamani Isaya Laltaika


Reason

This comment was particularly insightful because it reframed digital evidence from an abstract legal concept to something personally relevant to every participant. By connecting everyday digital activities to evidence creation, Judge Laltaika made the technical discussion accessible and highlighted the ubiquity of digital footprints in modern life.


Impact

This observation shifted the discussion from theoretical legal principles to practical, personal implications. It helped establish the relevance of the topic for all participants and set the foundation for understanding why digital evidence standards matter for everyone, not just legal professionals.


We know that AI can have hallucinations. We’ve seen law cases in the United States where a lawyer just put in a question to the AI system and got back case citations that were not true. They made them up. Because AI and large language models use predictive technology, not definite technology.

Speaker

Peter Swire


Reason

This comment introduced a critical distinction between predictive and definitive technology that fundamentally challenges how courts might approach AI-generated evidence. The concept of AI ‘hallucinations’ in legal contexts represents a new category of evidentiary risk that traditional legal frameworks weren’t designed to handle.


Impact

This insight prompted a deeper discussion about verification methods and introduced the need for new protocols like citation checking. It elevated the conversation from basic digital evidence authentication to the more complex challenge of AI-generated content, influencing subsequent discussions about verification standards.


When surveillance happens outside transparent legal frameworks, courts are sidelined and unable to guarantee fundamental rights they are tasked to protect… the nature of how they work and their affordances move way beyond traditional investigative methods, such as telephonic interceptions.

Speaker

Jacqueline Pegato


Reason

This comment was thought-provoking because it highlighted how advanced surveillance technologies can undermine judicial authority itself. By pointing out that spyware capabilities exceed traditional investigative methods, Pegato challenged the adequacy of existing legal frameworks and raised questions about institutional power balance.


Impact

This observation shifted the discussion from technical evidence handling to broader questions of judicial oversight and democratic accountability. It introduced the concept that technology might be outpacing the law’s ability to maintain checks and balances, prompting discussions about constitutional interpretation and regulatory gaps.


In many cases, even if the prosecution fails to even submit digital evidence, and the main barrier here comes from the lack of cooperation from the digital platforms, like social media platforms, in responding to requests from law enforcement agencies to provide information.

Speaker

Marin Ashraf


Reason

This comment revealed a critical gap between legal authority and practical enforcement in the digital age. It highlighted how private platforms’ cooperation (or lack thereof) can determine access to justice, particularly for vulnerable populations like survivors of online gender-based violence.


Impact

This insight broadened the discussion beyond technical evidence standards to include jurisdictional and corporate accountability issues. It introduced the concept that justice outcomes might depend on private companies’ policies and responsiveness, adding a new dimension to the conversation about digital justice.


There’s a principle the honorable judge will know that innocent until proven guilty… there are certain challenges which are often faced by the defendants in the digital crimes. And I will just mention a few of them. One of them is like updating outdated law legislations because every day new thing is happening in the digital world.

Speaker

Umar Khan


Reason

This comment was insightful because it highlighted the tension between fundamental legal principles and rapidly evolving technology. Khan pointed out that the speed of technological change creates a structural challenge for legal systems that rely on precedent and established procedures.


Impact

This observation prompted discussion about the need for adaptive legal frameworks and continuous judicial education. It shifted focus to the systemic challenge of keeping legal systems current with technological developments, influencing later discussions about capacity building and international cooperation.


So the judiciary does not operate in silo, like it has its own way of preserving. No, we are part of the government. So the standard of security that applies to records of parliament or state house applies to the court as well.

Speaker

Eliamani Isaya Laltaika


Reason

This comment challenged assumptions about judicial independence in cybersecurity contexts. It revealed how digital infrastructure requirements might blur traditional separations between branches of government, raising questions about both security benefits and potential vulnerabilities.


Impact

This insight prompted discussion about institutional cybersecurity strategies and highlighted practical considerations for court system protection. It added a governance dimension to the technical discussion and influenced thinking about collaborative approaches to judicial cybersecurity.


Overall assessment

These key comments collectively transformed the discussion from a technical examination of digital evidence procedures into a comprehensive exploration of how technology is reshaping fundamental aspects of justice systems. The most impactful insights connected abstract legal concepts to personal experience, revealed gaps between legal authority and practical enforcement, and highlighted how technological change challenges traditional legal frameworks. The comments created a progression from individual evidence handling to systemic questions about judicial authority, democratic oversight, and institutional adaptation. This evolution helped establish the session’s central theme: that digital transformation requires not just new technical skills, but fundamental reconsideration of how justice systems operate in the digital age. The discussion successfully bridged technical, legal, and policy perspectives, achieving the stated goal of bringing judiciary concerns into the broader internet governance conversation.


Follow-up questions

How do we protect institutions from cyber threats?

Speaker

Naza Nicholas


Explanation

This was identified as question number three in the introduction, highlighting the need for institutional cybersecurity measures to protect judicial systems


AI and justice – Can we trust machines, machine learning in evidence analysis or sentencing?

Speaker

Naza Nicholas


Explanation

This was identified as question number four, addressing the critical issue of AI reliability in judicial decision-making processes


Training gaps – Judges need continuous specialized training to keep up with emerging tech and things like AI

Speaker

Naza Nicholas


Explanation

This highlights the ongoing need for judicial education and capacity building in digital technologies


What other protocols must be in place to ensure that alleged digital evidence is properly authenticated beyond digital forensics?

Speaker

Participant (Marta)


Explanation

This question arose from the Ola Bini case in Ecuador, emphasizing the need for comprehensive protocols to prevent misuse of digital evidence


How to handle contact information and notification issues in cross-border data processing cases?

Speaker

Adriana Castro


Explanation

This addresses practical challenges in international cooperation for digital evidence collection and data protection compliance


How to protect sensitive personal data during judicial proceedings while ensuring fair trial rights?

Speaker

Adriana Castro


Explanation

This concerns balancing transparency in legal proceedings with privacy protection, especially for sensitive personal information


Development of criminal data protection framework in jurisdictions that lack comprehensive cyber crime laws

Speaker

Jacqueline Pegato


Explanation

This addresses legislative gaps in countries like Brazil that have data protection laws but lack specific criminal frameworks for data-related crimes


How to update outdated legislation to address rapidly evolving digital crimes?

Speaker

Umar Khan


Explanation

This highlights the challenge of keeping legal frameworks current with technological developments in cybercrime


How to establish judicial training programs and capacity building initiatives across different jurisdictions?

Speaker

Multiple speakers (Eliamani Laltaika, Umar Khan)


Explanation

This addresses the urgent need for systematic education of judges and legal professionals in digital evidence and cybersecurity matters


How to improve cooperation between digital platforms and law enforcement agencies for evidence collection in online gender-based violence cases?

Speaker

Marin Ashraf


Explanation

This addresses practical barriers in obtaining digital evidence from social media platforms and service providers for prosecution of online violence cases


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Lightning Talk #90 Tower of Babel Chaos

Lightning Talk #90 Tower of Babel Chaos

Session at a glance

Summary

This discussion focused on exploring communication barriers in internet governance by experimenting with multilingual participation rather than defaulting to English as the common language. Virginia (Ginger) Paque, the session moderator, initiated the experiment by suspending the rule of English as the universal language, despite being the only native English speaker among nearly two dozen participants. The session began with various speakers defining internet governance in their preferred languages, with most referencing the WGIG definition that emphasizes multistakeholder collaboration between governments, the private sector, and civil society.

The core experiment involved participants communicating in their native languages to simulate a “Tower of Babel” scenario and observe what challenges and solutions emerged. Several participants reported confusion, one describing migraine-inducing chaos when unable to understand others, and people naturally clustered into language groups with speakers they could comprehend. Some participants discovered they were the only representatives of their languages present, including speakers of Maltese, Samoan, Cape Verdean Creole, and Chichewa.

Ken Huang from Lingo AI presented artificial intelligence as a potential solution, explaining that AI can theoretically process all 7,000 human languages but defaults to English and major languages when data sets are insufficient. Other technological solutions discussed included real-time translation devices and the need for better multilingual datasets. Participants debated whether English should remain the de facto international language due to its practical effectiveness, or whether multiple language options should be provided to increase inclusivity.

The discussion revealed the political and cultural complexities of language choice, with examples from India and China where English serves as a neutral option among competing local languages. The experiment ultimately highlighted both the necessity of common communication methods and the potential for technological solutions to bridge linguistic divides in global internet governance discussions.

Keypoints

**Major Discussion Points:**

– **Language barriers in global internet governance discussions** – The session explored how requiring English as a common language excludes voices and creates communication challenges, despite most participants being non-native English speakers

– **Experimental multilingual communication approach** – Participants engaged in a “Tower of Babel” experiment where English was suspended as the required language, allowing people to communicate in their native languages to observe what happens

– **Technology solutions for language barriers** – Discussion of AI translation capabilities, with insights that AI thinks in mathematics rather than any specific language, and exploration of real-time translation tools and their current limitations

– **The politics and practicality of language choice** – Debate over whether English should remain the default international language due to its practical effectiveness versus concerns about linguistic imperialism and the need for more inclusive multilingual options

– **Isolation of minority language speakers** – Recognition that many participants were the sole representatives of their native languages (Maltese, Samoan, various Creoles, etc.), highlighting the challenge of meaningful participation in global forums

**Overall Purpose:**

The discussion aimed to examine and challenge the dominance of English in international internet governance forums by experimenting with multilingual communication approaches and exploring technological and policy solutions to make global discussions more linguistically inclusive.

**Overall Tone:**

The tone began as experimental and somewhat chaotic during the multilingual exercise, with participants reporting confusion and difficulty communicating. However, it evolved into a thoughtful, collaborative discussion as participants shared insights from the experiment. The atmosphere remained respectful and constructive throughout, with genuine curiosity about finding solutions to language barriers, though some participants ultimately concluded that English remains practically necessary for international communication.

Speakers

**Speakers from the provided list:**

– **Virginia (Ginger) Paque**: Diplo, Representative of CADE consortium, native English speaker, session moderator and organizer

– **Abed Kataya**: SMEX, CADE

– **Kenneth Harry Msiska**: Forus, CADE

– **Stephanie Borg Psaila**: Diplo, CADE, Maltese speaker

– **Karolina Iwańska**: ENCL, CADE

– **Slavica Karajicic**: Diplo, CADE

– **Bimsara Malshan**: Fusion, CADE

– **Ken Huang**: Co-founder of Lingo AI, from Singapore Internet Governance Forum

**Additional speakers:**

– **Audience members** (multiple unnamed participants who shared observations during the discussion, including speakers of various languages such as Chinese, German, Samoan, Hindi, Cape Verdean Creole, Chichewa, and others)

– **Una**: Participant from China, involved in language technology research and community language projects

Full session report

# Report: Experimental Discussion on Multilingual Communication Barriers in Internet Governance

## Executive Summary

This experimental session, moderated by Virginia (Ginger) Paque, explored communication barriers in internet governance forums through a unique “Tower of Babel” approach. The session brought together participants from diverse linguistic backgrounds to examine challenges and potential solutions for multilingual participation in global digital policy discussions. The experiment involved temporarily suspending English as the universal language to allow participants to experience firsthand the communication barriers that typically remain hidden when English dominance is accepted as standard practice.

The session revealed tensions between linguistic inclusivity and practical communication needs, highlighting the complex relationship between language, technology, and global governance. Participants discussed questions of fairness, efficiency, and the role of emerging technologies in bridging linguistic divides.

## Participant Overview and Experimental Context

The discussion featured participants from various linguistic backgrounds, with Virginia (Ginger) Paque serving as moderator. As she noted, “I have spent most of my life speaking Spanish although English is my native language.” Key participants included Abed Kataya, Kenneth Harry Msiska, Stephanie Borg Psaila from Malta, Karolina Iwańska, Slavica Karajicic, Bimsara Malshan, and Ken Huang, co-founder of Lingo AI from the Singapore Internet Governance Forum.

Participants represented languages including Chinese, German, Samoan, Hindi, Cape Verdean Creole, Chichewa, Swahili, and others, creating a genuinely multilingual environment for the experiment.

## Definitions of Internet Governance

Before the multilingual experiment, participants provided definitions of internet governance in their preferred languages. Abed Kataya emphasized comprehensive collaboration, defining it as involving cooperation between government, the private sector, civil society, and technical communities. Kenneth Harry Msiska referenced the WGIG definition, describing it as establishing rules, policies, and procedures applied jointly by all stakeholders.

Stephanie Borg Psaila offered a different perspective, critiquing the terminology itself for overemphasizing government roles with insufficient attention to civil society participation. Other participants provided complementary definitions: Karolina Iwańska emphasized decentralized management, Slavica Karajicic highlighted the multidisciplinary nature encompassing infrastructure, standards, security, law, economics, development, culture, and human rights, and Bimsara Malshan focused on shared principles and decision-making procedures.

## The Tower of Babel Experiment

The session’s central experiment involved encouraging participants to communicate in their native languages to observe emerging challenges and solutions. Paque initiated this experiment despite being a native English speaker, stating her goal was to highlight issues with English dominance.

The immediate results varied among participants. Some reported confusion during the multilingual phase, with one audience member describing the experience as “chaotic” and noting they could only connect with Swahili speakers. However, Paque herself observed that it was quite controlled chaos.

Participants naturally began clustering into linguistic groups, seeking speakers of languages they could understand. Several participants discovered they were the sole representatives of their native languages at the forum, including Borg Psaila as the only Maltese speaker and others representing Samoan, Cape Verdean Creole, and Chichewa.

## Technological Solutions and AI Capabilities

Ken Huang presented insights on artificial intelligence capabilities, explaining that AI can theoretically process all 7,000 human languages but defaults to English and major languages when data sets prove insufficient. He noted that “AI can design their own native computing languages” distinct from human languages, suggesting possibilities for communication systems that transcend traditional linguistic boundaries.

An audience member added that AI “thinks in mathematics and digital proximity” rather than any specific language, making it potentially culturally neutral. However, participants also noted limitations, with current AI speech recognition handling only about 100 languages with limited effectiveness, and 95% of internet language data existing in English.
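
The “mathematics and proximity” observation can be illustrated with a toy example: modern models map words from any language into vectors of numbers and compare meaning by geometric closeness (here, cosine similarity). This is a minimal sketch; the three-dimensional vectors are invented for illustration, whereas real models learn hundreds of dimensions from data.

```python
import math

# Invented toy embeddings: "internet" in English and in Chinese are assumed
# to land near each other, while an unrelated word lands far away.
embeddings = {
    "internet": [0.90, 0.10, 0.30],
    "互联网": [0.88, 0.12, 0.28],
    "banana": [0.10, 0.90, 0.20],
}

def cosine(a: list, b: list) -> float:
    """Cosine similarity: values near 1.0 mean the same 'notion'."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

print(cosine(embeddings["internet"], embeddings["互联网"]))  # close to 1.0
print(cosine(embeddings["internet"], embeddings["banana"]))   # much lower
```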

The discussion revealed that Google Translate is adding “100 languages every year,” showing progress in technological solutions while acknowledging current constraints.
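
For the text-to-text translation discussed here, open-source tooling already exists. The sketch below is one possible illustration, assuming the Hugging Face `transformers` library and a published Helsinki-NLP OPUS-MT model (neither was named in the session); the speech-to-speech scenario participants imagined would additionally need the speech-recognition step where, as noted above, quality drops sharply outside roughly 100 languages.

```python
# Requires: pip install transformers sentencepiece
from transformers import pipeline

# English-to-Chinese translation with a publicly available OPUS-MT model.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-zh")

result = translator("Internet governance needs every voice at the table.")
print(result[0]["translation_text"])
```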

## The English Dominance Discussion

The session revealed different perspectives on English as a common language. After experiencing the multilingual experiment, one participant concluded that “English is the solution for the chaotic Tower of Babel situation.” A Hindi-speaking participant explained that in India’s multilingual context, English serves as a politically neutral option when native languages carry political implications.

Abed Kataya provided historical context, noting that English is “the third most spoken native language globally” following Chinese Mandarin and Spanish, but serves as the current business language due to power structures. He suggested that language dominance follows historical patterns, with different languages serving as lingua francas in different eras.

However, Paque questioned the fairness of requiring English when most participants are non-native speakers. Borg Psaila proposed alternative approaches, suggesting multiple language options with simultaneous interpretation, similar to UN and EU practices.

## Cross-Linguistic Communication Approaches

The discussion explored alternatives to the binary choice between English dominance and multilingual chaos. Participants identified cross-linguistic communication as a promising approach, where speakers of related languages can communicate in their native tongues while understanding responses in different but related languages. Examples mentioned included “Portuñol, Spanglish” as forms of cross-linguistic communication.

One observation noted that when Spanish and Hindi speakers attempted to communicate, some words were “close enough to English” to facilitate understanding, demonstrating natural bridges between languages.

Paque also raised the topic of internationalized domain names as another aspect of multilingual internet governance that requires consideration.

## Areas of Agreement and Disagreement

Participants demonstrated consensus on several issues, including the need for multistakeholder collaboration in internet governance and acknowledgment that language barriers create challenges in international forums. There was also general agreement that AI offers potential for addressing language barriers while facing current limitations.

However, significant disagreements emerged regarding solutions. The most notable disagreement concerned whether to maintain English as the universal solution or implement multiple language options. Borg Psaila advocated for multiple language choices with interpretation, while others defended English as practical and effective.

Disagreements also emerged about AI capabilities, with varying levels of optimism about technological solutions versus emphasis on current limitations.

## Key Insights and Observations

The experimental approach provided several insights into multilingual communication challenges. The session demonstrated that language barriers create genuine difficulties in international forums, though the severity of these challenges varied among participants. The experiment showed how participants naturally seek linguistic connections and form communication clusters.

The discussion highlighted both the practical effectiveness of English as a common language and concerns about fairness when most participants are non-native speakers. Technological solutions emerged as promising but currently limited, particularly for less common languages and oral communication.

The session also revealed that different participants have varying tolerance for multilingual communication challenges, with some finding creative ways to bridge language gaps while others prefer clear common language solutions.

## Conclusion

This experimental session provided insights into the complex challenges of multilingual communication in internet governance forums. By temporarily suspending English dominance, participants experienced linguistic barriers firsthand and explored various approaches to multilingual communication.

While no definitive solutions emerged, the discussion revealed the trade-offs between inclusion and practicality, the potential and limitations of technological solutions, and the varying perspectives on language choice in international forums. The session demonstrated that addressing communication barriers requires balancing practical communication needs with concerns about fairness and inclusion.

The experiment highlighted that meaningful progress on linguistic inclusion may require a willingness to experiment with established practices while acknowledging both practical constraints and equity concerns in international governance processes.

Session transcript

Virginia (Ginger) Paque: Good morning, buenos días [good day]. Hola, ¿cómo están? [Hello, how are you?] I am Ginger, for the CADE consortium, where we will all be working together. We're starting with the proposition that the largest, strongest challenge to multi-stakeholder inclusion and voices in global processes is communication. This challenge predates the digital divide. It underlies the digital divide. So we will now try to work with that problem, because principle two, resolving that challenge of communication, is the fact that the biggest challenge to communication is language. We have been communicating in CADE in English, in spite of the fact that, out of almost two dozen people, I am the only person who is a native English speaker. Is that fair? We are proposing now, for this session, to start with a basic discussion of internet governance without the stipulation of English as the imposed common language. So the rule of English as a common language is now suspended. I can speak English because it's my native language. If English is not your native language, there are no rules; the rule is suspended. So I invite you all to participate. We will start with our definitions, and I invite my colleagues who will start the discussion with internet governance in their own languages; then we will open the floor and ask each of you to be a participant, a panelist, and an active member. Thank you very much and welcome.

Abed Kataya: Internet governance is the development and application, by government, the private sector, civil society, and the technical community, each in its own role, of the principles, standards, rules, and procedures for decision-making, and of the joint activities and programs that shape the development of the internet and its use. Internet governance is an essential issue because of the internet's potential to enhance humanity's sustainable development, to build inclusive knowledge societies, and to promote the free flow of information and ideas all over the world. And now I leave you with my colleague, Kenneth Harry Msiska.

Kenneth Harry Msiska: Thank you very much. According to the WGIG (Working Group on Internet Governance), governance means establishing rules, policies, and appropriate procedures that are applied jointly by all stakeholders—such as governments, companies, and non-governmental organizations—while respecting principles, frameworks, laws, and decision-making processes, as well as policies that promote governance and its effective implementation.

Virginia (Ginger) Paque: The WGIG definition of internet governance stipulates that internet governance is the development and application by governments, the private sector, and civil society, in their respective roles, of shared principles, norms, rules, decision-making procedures, and programs that shape the evolution and the use of the internet.

Stephanie Borg Psaila: Internet governance focuses on the role of government in shaping digital policy. The phrase places importance on the role of government in governance, but less on the role of civil society.

Karolina Iwańska: Internet governance uses the same word we would use to describe managing a company, a team, or a crisis. So “Internet governance” emphasizes the decentralized nature of the Internet, rather than its focus on regulations or government institutions.

Slavica Karajicic: Internet governance is the development and application of common principles, norms, rules, decision-making processes and programs that shape the evolution and use of the Internet, by governments, the private sector and civil society, in their specific roles. It is a multidisciplinary field, encompassing many aspects: infrastructure, standards, security, law, economics, development, culture, human rights, etc. (Dictionary of Internet and Communications)

Bimsara Malshan: The WGIG definition of internet governance stipulates that internet governance is the development and application by governments, the private sector, and civil society, in their respective roles, of shared principles, norms, rules, decision-making procedures, and programs that shape the evolution and the use of the internet.

Virginia (Ginger) Paque: Thank you very much for listening. We now invite everyone, in your places or getting up to join us; the closer you are to the stage, the better, and the more the camera will capture this experiment in vivo. Again, I'm speaking English because it is my native language. I invite you to remember that we have suspended English as a common or universal language. I invite you to use whatever language, gesture, or way of communication that works; we are trying to see what would happen if we had an honest Tower of Babel. Would you please join us? I know we have other languages… To simulate what happens, how are we going to join silos… There are some questions we are going to consider. This right now is a group exercise; we don't have individual speakers. We will have individual speakers after this group exercise, so let's experiment. Okay, I'm hearing a lot of things going on because of the chaos, which is not that chaotic, but I've heard a lot of people not understanding each other. I invite you to take your places or to stay standing. We'll do a little bit more directed analysis now, where we have people speaking in their own languages, or English, or however you want, to communicate the best you can as we work out, for instance, any of the questions. If I can ask Slavica to please put the questions back on the screen, because that's where we'll be going now. I hope you have the basic context from our cheat sheet. We have four microphones, so if people want to speak, come around, but come up this way so the camera can catch you. I invite you to tell us your suggestions. For instance, I had a gentleman who was, I was very pleased, representing the Chinese language. Where are you? He was actually proposing an AI solution; I hope you will come up and join us at the mic. I invite you all to come up with 30-second interventions, especially representing languages that have not been used. And if you have a solution, what worked right now? What didn't work? Anybody talking sign language? Did anyone find themselves saying, I can't understand what you're telling me? So what happened? Come and tell us, 30 seconds each. Come on up. Come and tell us about your solution. You have 30 seconds. Anybody who wants to: the mic is open, we have four mics, 30 seconds each, to tell us your answers or your comments. Please go ahead.

Ken Huang: I'm from Lingo AI; I'm the co-founder. We are from the Singapore Internet Governance Forum; we are also the co-founders. So, what language does AI think in? AI can think in every language, all 7,000 languages. But if we don't have enough data sets, then it thinks in English and other major languages.

Virginia (Ginger) Paque: How interesting. So we have a default language for AI as well.

Ken Huang: You know, AI can actually talk in languages other than the 7,000 human ones. They can design their own native computing languages; it's different from human languages.

Virginia (Ginger) Paque: And that is such an interesting concept, which is why I invite you all to visit the booth and find out more about this project. Because, is AI our solution? Is that how we're going to communicate in the future? What does AI do? So please join the booth to find out more about that possibility, because it's one of our possible solutions. What else do you think? What did you find out? What's your observation from that flash experiment?

Audience: Excellent experience, thank you for organizing it. It was easy and interesting to find a conversational partner due to the labels, the labels showing the language that people speak. Because the majority of people here speak more than one language, we could have had a list of languages on our chests to more easily find common communication languages, and surprisingly you can find people that speak…

Virginia (Ginger) Paque: What would be the common language? That is very interesting. We did encourage people to put their native language, their first language, to see the diversity of primary languages, but that is an excellent suggestion, because it would have helped direct those silos, first of all, which would have formed if we found commonalities, and we might also have found that there is an alternative second language, or common language. So yes, I like that; next time we'll have to try it. This is our first experiment. We need to know: what did you think? What did you hear? What did you find out?

Audience: I have an answer here from the experiment. I almost got a migraine. Everyone was speaking whatever, and I was not understanding; I was only picking up from the Swahili speakers, so those are the African languages, and from Uganda. I think we have the Bantu languages, so they sort of overlap. The Spaniards and the rest: it's chaotic.

Virginia (Ginger) Paque: Does anyone dare to share?

Audience: Thank you for the experiment

Virginia (Ginger) Paque: Can you come up so the camera might catch you? Walk up while you're talking. Keep going.

Audience: So I saw that during the experiment, people from the same nation would gather up together in small groups, like the English speakers. I was speaking Chinese with my Chinese fellows. I feel like that separates us as nations, because we would only be in groups of the people we understand. So I do think English as a common language does work in an international context. English is the solution for the chaotic Tower of Babel situation; English is the solution for the chaos.

Virginia (Ginger) Paque: Thank you. What is your native language?

Audience: German, but I do speak Chinese.

Virginia (Ginger) Paque: So you chose Chinese, and Chinese is one of the candidates for a common language, except that I have given conferences in situations for Chinese people who have the common language of English, because they don't understand each other. That happens also in Africa and parts of the US. So, Samoa, would you please join us? Thank you very much.

Audience: Thank you. To me it was a confusing one, because I'm the only Samoan here, and only I understand my language. So on this global platform there should be a common language, so that all of us can understand each other. As we know, English, we say, is an international language, so we need to learn it so that we can communicate. There are some exceptions; it depends on the target and the market and the number of people in that space, and then you can have that common language, or your native language, to communicate. But for international ones like here, it should be a common language so that we can easily communicate.

Virginia (Ginger) Paque: Do I have anyone waiting?

Audience: Hi. So I was talking to the gentleman, I don't know if he's here; he was talking in Spanish and I was talking in Hindi. But the thing was, some of the words that he was saying were close enough to English that I could kind of understand, so that's there. And then the second thing is, in Hindi, I don't even know… I looked up what the word for internet is; I don't think there is a word. So for some of these newer terms there are not even native terms that exist. And then, just coming back to the point: India has so many dialects, and we have no national language, because language is very political. There are some languages, like Hindi, that seem to be the national language, but then there's pushback from other regions because of culture. So English becomes the default language, because it's politically the most equal. So that's just one point.

Virginia (Ginger) Paque: So I have a couple of things; please pardon me for monopolizing the mic, and raise your hand when you want to join in. Come on up in the meantime. English is the de facto second language, and perhaps it is the best solution, because it works. There are those who would say English is an imposed language, and you used the word political. So I ask you to consider, and add your comments, whether you think English has been imposed, or yes, it's imposed, but it works. The other thing I would like to ask is: how many of you think you're the only speaker of your language? We know of two, we think: only one Maltese speaker, only one Samoan, only one Polish. What language is yours? Can you get to a mic? Can you come up?

Audience: I speak two types of Creole, and I think I'm one of the few Cape Verdeans here in Oslo.

Virginia (Ginger) Paque: Excellent. So we now have at least four people who are the only representatives of their language. That's a challenge that they have obviously overcome. Please feel free to comment, and please look for a mic or come up to speak; we want to hear your reactions.

Audience: I'm from Malawi, and my local language is Chichewa. In the experience, what I tried to do was just to guess, as one of the only speakers of my language, and I think that's a challenge, as well as the sign language: I could see there were different tags, so when I looked at a person who was pointing at their language tag, I could understand that he was telling me, I'm coming from this particular region, and what's your name, where are you coming from? So it was all about guessing and also about sign language. But the experience was so fun, and it taught me something: that language matters, especially in international gatherings like this. We need to have a common language, a common ground, which we can use. Just like in the portrait of the Tower of Babel, if we have different languages it is so hard for us to understand each other and to pursue a common goal. Thank you.

Virginia (Ginger) Paque: How many people… um, is anyone ready? Is anyone waiting for a mic? Please take the mic. This is you; you are the panellists. Where are my panellists?

Stephanie Borg Psaila: Can I? Hi. I'm not only the only one who speaks Maltese; I'm probably the only Maltese person here, right? I have never been to an IGF, with maybe one or two exceptions, where there were other people from Malta, wherever it's been organized. So I'm a bit isolated, but English breaks down that isolation. Strangely enough, it was in Italian that we communicated, so we found a second language there, or a third, I would say. I want to challenge the notion of one solution, of English being the one solution, and I would add to our colleague from the Council of Europe who was mentioning choices. I think this is actually the way forward: having choices. It is why the UN has several official languages; it is why the EU has multiple official languages. So perhaps rather than saying let's have English as a common language, why not give people more options? I'm not saying a thousand options to choose from, because that would be taking it a bit too far, right? But what if, in the discussions that we have, there were simultaneous interpretation in at least a handful of different languages, so that people have choices of which language to follow? I think we would tick more boxes in terms of participation if there were even more languages. I'm not expecting any fora to choose Maltese as another possibility; that wouldn't make a lot of sense. But there are so many other languages which, you know, a lot of people know: Spanish, Chinese, Swahili, German, French, so many.

Virginia (Ginger) Paque: And your point about those of us who can talk to Italians: I should explain, I spent most of my life speaking Spanish, although English is my native language. And we have Portuñol, Spanglish; well, there are many more combinations where we actually do speak our own languages and we are answered in the other language, which we can understand. So, Stephanie, that's an exciting possibility of looking for channels, joining in together. Would you have a comment on that? You deal with a lot of languages, and you had an interesting…

Audience: Recently I was also in an interesting presentation on large language models, and I learned that AI doesn't think in any language; it thinks in terms of mathematics, the proximity of different notions, which are coded in digits. So indeed, let's not, how to say, be fooled by the idea that one language is going to predominate. It is mathematics, actually; everything which is digital is digits, zero or one, I think that's it. So it is important to use this platform, I mean the digital world and AI, as a culturally relatively neutral one. Of course it feeds on existing data, but that evolves quickly, and basically I would say: let's trust and explore this world.

Virginia (Ginger) Paque: And hope that those who are writing the algorithms speak a lot of languages. I'm sorry for this bad joke I'm going to make: at least in digital and binary we only have to learn two letters; it's not like these languages my colleagues have just been speaking that I can't even read. Zero and one I can understand. Anyone else on that? And I would go back to a comment: do you have a response to that, my friend, on the AI? Thank you very much for that addition; I think it's very important to our search for solutions.

Audience: Hello everyone, my name is Una and I come from China. You know, in China the official language is Mandarin, but actually there are around 60 local languages and nuances, and they speak in different ways of communication. So even once Chinese people learn English, when they go to an international platform they are not going to speak and communicate very well, because there is not much everyday practice in English. So we see that the problem is that languages should not be barriers; they should be bridges that can connect everyone. We are at the IGF and we speak different languages, hundreds of languages, from the hundreds of countries we are from, but we can only speak in English to communicate. So we are now doing some exploration of how we can connect, speaking our own native languages and using some hardware, like earphones, that can translate all the languages into other native languages. Say I speak Chinese and you speak English, and we can understand each other well in our own native languages. We know that Google Translate has added 100 languages every year for inclusive language development, but that is text-to-text translation, not oral communication. Technologies like AI speech recognition can only recognize around 100 languages, and not at very high quality, which means you cannot rely on that technology. We lack the data sets, because almost 95% of the language data on the internet is in English, so the rest of the languages and peoples are not communicating in their own language on the internet; that's the problem. The current language models understand English or Spanish or Chinese, but not all the other languages, and this leads to bias: AI cannot understand your own culture, your own native language, or your own local knowledge. Language is very complex. So we are now doing some project research on community languages, on the community side, which means people from all over the world can contribute their native knowledge in their own native language.

Virginia (Ginger) Paque: That's amazing; that's a very eye-opening insight, and we really appreciate it, especially added to the others. On that note, you mentioned both the advantages and the shortcomings of the technology, which was excellent. I did wonder whether we would see people pulling out their phones for Google Translate to help them during the discussion; I did not see any cases of that, which is very interesting. If there are techies in the group: I am always curious whether the use of internationalized domain names might help, because they have to rationalize all of these different languages, and it becomes, I think, a combination rather than a dominance, because we do of course need our domain names in languages we can comprehend, or we can't use the internet. I have someone else.

Abed Kataya: Oh yes, it's me. Actually, let me explain something: English is the third most spoken native language in the world, not the first. The first is Mandarin Chinese, the second is Spanish, and the third is English. I think the reason we are speaking English now, especially in business, is that it is the language of the dominant powers. Every era has its own language: Arabic used to be the dominant language, then Spanish in some eras, and so on. So maybe next we will speak Mandarin Chinese as our business language; we don't know.

Virginia (Ginger) Paque: Well, we definitely have a way forward, and I would love for anyone who can to take it up, but we do need to close; the timer is yelling at me in red, which makes sense. I thank you all for your input. We apologize for the chaos, but the chaos was important. Certainly among our thank-yous: thank you to all of you (we should have a slide with thank you in many languages, but I don't see it), thank you to the tech team for supporting us, and thank you to the online participants who are watching us and can't raise their voices in whatever language they want. Thank you all very, very much. A round of applause for you.

Virginia (Ginger) Paque

Speech speed: 147 words per minute
Speech length: 1723 words
Speech time: 702 seconds

Communication challenge predates and underlies the digital divide, with language being the biggest barrier to communication
Explanation: Paque argues that the fundamental challenge to multi-stakeholder inclusion in global processes is communication, which existed before and forms the foundation of the digital divide. She identifies language as the primary obstacle to effective communication in international forums.
Evidence: Out of almost two dozen people in their consortium, she is the only native English speaker, yet they have been communicating exclusively in English.
Major discussion point: Language barriers in global internet governance discussions
Topics: Sociocultural
Agreed on: Language barriers create significant challenges in international forums

English dominance is unfair when only one participant is a native English speaker among dozens
Explanation: Paque questions the fairness of using English as the imposed common language when she is the sole native English speaker among nearly two dozen participants. She proposes suspending English as the mandatory common language to demonstrate this inequity.
Evidence: In their CAID consortium of almost two dozen people, she is the only native English speaker.
Major discussion point: Language barriers in global internet governance discussions
Topics: Sociocultural
Disagreed on: Fairness vs. practicality of English dominance

Abed Kataya

Speech speed: 142 words per minute
Speech length: 246 words
Speech time: 103 seconds

Internet governance involves comprehensive collaboration between government, private sector, civil society and technical communities in developing principles and standards
Explanation: Kataya defines internet governance as the development and implementation of comprehensive collaboration among all stakeholders including government, private sector, civil society, and technical society. This collaboration focuses on creating principles, standards, rules, and procedures for decision-making that shape internet development and use.
Evidence: Internet governance enhances sustainable development, builds comprehensive knowledge societies, and promotes free flow of information globally.
Major discussion point: Definitions and scope of internet governance
Topics: Legal and regulatory
Agreed on: Internet governance requires multi-stakeholder collaboration

English is the third most spoken native language globally, following Chinese Mandarin and Spanish, and serves as the current business language due to dominant power structures
Explanation: Kataya challenges the assumption of English primacy by noting it ranks third among native languages globally. He explains that English functions as the business language because it represents the dominant power's language, and suggests this could change as power structures shift.
Evidence: Chinese Mandarin is first, Spanish second, English third in native speakers; Arabic was previously dominant, Spanish in some eras, and Chinese Mandarin might be next.
Major discussion point: English as a common language solution
Topics: Sociocultural

Kenneth Harry Msiska

Speech speed: 390 words per minute
Speech length: 52 words
Speech time: 8 seconds

Internet governance establishes rules, policies and procedures applied jointly by all stakeholders while respecting frameworks and decision-making processes
Explanation: Msiska references the WGIG definition, emphasizing that internet governance means establishing rules, policies, and procedures that are jointly applied by all stakeholders including governments, companies, and non-governmental organizations. This approach must respect existing principles, frameworks, laws, and decision-making processes.
Evidence: References the Working Group on Internet Governance (WGIG) definition.
Major discussion point: Definitions and scope of internet governance
Topics: Legal and regulatory
Agreed on: Internet governance requires multi-stakeholder collaboration

Stephanie Borg Psaila

Speech speed: 128 words per minute
Speech length: 312 words
Speech time: 145 seconds

Internet governance focuses primarily on government's role in shaping digital policy with less emphasis on civil society
Explanation: Borg Psaila critiques the term 'internet governance' for placing disproportionate importance on government's role in governance while diminishing the role of civil society. She suggests this terminology creates an imbalance in how different stakeholders are perceived in digital policy-making.
Major discussion point: Definitions and scope of internet governance
Topics: Legal and regulatory

Multiple language options should be provided rather than imposing one common language, similar to UN and EU practices with official languages
Explanation: Borg Psaila challenges the notion of English as the single solution and advocates for providing multiple language choices in international forums. She argues that offering several simultaneous interpretation options would increase participation and inclusion, following the model of UN and EU multilingual practices.
Evidence: The UN has several official languages, and the EU has multiple official languages; she mentions Spanish, Chinese, Swahili, German, and French as widely spoken languages that could be included.
Major discussion point: Alternative approaches to multilingual communication
Topics: Sociocultural
Disagreed on: English as the universal solution vs. multiple language options

Some participants are the only speakers of their native language at international forums, creating isolation
Explanation: Borg Psaila describes her experience as typically being the only Maltese speaker at Internet Governance Forums, creating isolation that is only broken by using English or finding alternative common languages. This highlights the challenge faced by speakers of less common languages in international settings.
Evidence: She has never been to an IGF with more than one or two other people from Malta; during the experiment, she communicated in Italian rather than English.
Major discussion point: Language barriers in global internet governance discussions
Topics: Sociocultural
Agreed on: Language barriers create significant challenges in international forums

Karolina Iwańska

Speech speed: 380 words per minute
Speech length: 38 words
Speech time: 6 seconds

Internet governance emphasizes decentralized management rather than focusing on regulations or government institutions
Explanation: Iwańska draws a parallel between internet governance and managing companies, teams, or crises to highlight that the term emphasizes the decentralized nature of the Internet. She argues this perspective focuses on distributed management approaches rather than centralized regulatory or governmental control.
Evidence: Compares internet governance to managing a company, team, or crisis.
Major discussion point: Definitions and scope of internet governance
Topics: Legal and regulatory

Slavica Karajicic

Speech speed: 600 words per minute
Speech length: 60 words
Speech time: 6 seconds

Internet governance is multidisciplinary, encompassing infrastructure, standards, security, law, economics, development, culture and human rights
Explanation: Karajicic provides a comprehensive definition emphasizing that internet governance involves the development and application of common principles, norms, rules, and decision-making processes by multiple stakeholders. She stresses its multidisciplinary nature, covering a broad range of areas from technical infrastructure to human rights.
Evidence: References the Dictionary of Internet and Communications; lists specific areas: infrastructure, standards, security, law, economics, development, culture, human rights.
Major discussion point: Definitions and scope of internet governance
Topics: Legal and regulatory
Agreed on: Internet governance requires multi-stakeholder collaboration

Bimsara Malshan

Speech speed: 129 words per minute
Speech length: 67 words
Speech time: 31 seconds

Internet governance involves shared principles, norms, rules and decision-making procedures that shape internet evolution and use
Explanation: Malshan reiterates the WGIG definition, emphasizing that internet governance is about the development and application of shared principles, norms, rules, and decision-making procedures by governments, private sector, and civil society. These elements work together to shape how the internet evolves and is used.
Evidence: References the WGIG definition multiple times.
Major discussion point: Definitions and scope of internet governance
Topics: Legal and regulatory
Agreed on: Internet governance requires multi-stakeholder collaboration

Ken Huang

Speech speed: 97 words per minute
Speech length: 80 words
Speech time: 49 seconds

AI can theoretically think in all 7,000 languages but defaults to English and major languages when data sets are insufficient
Explanation: Huang explains that while AI has the capability to process all 7,000 human languages, it defaults to English and other major languages when there isn't sufficient data available for less common languages. This creates a bias toward dominant languages in AI systems.
Evidence: Co-founder of lingo AI from the Singapore Internet Governance Forum; mentions the specific number of 7,000 languages.
Major discussion point: Technology and AI solutions for language barriers
Topics: Infrastructure
Disagreed on: AI language capabilities and limitations

AI can create its own native computing languages different from human languages
Explanation: Huang notes that AI systems can develop their own native computing languages that are distinct from human languages. This suggests AI communication methods that transcend traditional human linguistic barriers.
Evidence: Expertise as co-founder of lingo AI.
Major discussion point: Technology and AI solutions for language barriers
Topics: Infrastructure

Audience

Speech speed: 145 words per minute
Speech length: 1224 words
Speech time: 503 seconds

Language separation creates national silos where people only communicate within their linguistic groups
Explanation: An audience member observed that during the multilingual experiment, people naturally gathered in groups based on their shared languages, with English speakers, Chinese speakers, and others forming separate clusters. This separation reinforces national divisions rather than promoting international collaboration.
Evidence: Direct observation from the experiment, where people from the same nation gathered together in small groups.
Major discussion point: Language barriers in global internet governance discussions
Topics: Sociocultural
Agreed on: Language barriers create significant challenges in international forums

Chaos and confusion result when people cannot understand each other in multilingual settings
Explanation: Multiple audience members reported experiencing confusion, near-migraines, and chaos during the multilingual experiment. They could only understand speakers of languages they knew, leading to fragmented communication and difficulty following discussions.
Evidence: One participant reported almost getting a migraine and only picking up Swahili speakers; another described the situation as chaotic.
Major discussion point: Language barriers in global internet governance discussions
Topics: Sociocultural
Agreed on: Language barriers create significant challenges in international forums

English serves as the de facto second language and works effectively as a practical solution
Explanation: Several audience members argued that English functions effectively as a common international language and provides a practical solution to the Tower of Babel situation. They emphasized that English works as a communication bridge in international contexts where multiple languages create barriers.
Evidence: One German speaker who also speaks Chinese advocated for English as the solution; a Samoan speaker emphasized the need for a common language in international platforms.
Major discussion point: English as a common language solution
Topics: Sociocultural
Disagreed on: Fairness vs. practicality of English dominance

English functions as a politically neutral default language in multilingual contexts like India where native languages are politically charged
Explanation: An audience member from India explained that English serves as a politically neutral option in countries with multiple languages and dialects. In India's case, with no official national language and political tensions around language choices, English becomes the most equitable default option.
Evidence: India has many dialects with no national language; language is very political, with pushback against Hindi from other regions; some newer terms like 'internet' don't have native equivalents.
Major discussion point: English as a common language solution
Topics: Sociocultural
Disagreed on: Fairness vs. practicality of English dominance

English is necessary as a common language for international gatherings to pursue common goals
Explanation: Audience members argued that international forums require a common language to enable participants to understand each other and work toward shared objectives. They referenced the Tower of Babel as an example of how language diversity can prevent achieving common goals.
Evidence: Reference to the Tower of Babel story; examples from participants representing languages like Chichewa and Cape Verdean Creole.
Major discussion point: English as a common language solution
Topics: Sociocultural
Disagreed on: English as the universal solution vs. multiple language options

AI thinks in mathematics and digit proximity rather than any specific language, making it culturally neutral
Explanation: An audience member explained that AI systems don't actually think in human languages but rather in mathematical terms and digit proximity relationships. This mathematical foundation makes AI potentially more culturally neutral than human language-based communication, though it still depends on existing data inputs.
Evidence: Reference to a presentation on large language models; explanation that digital systems use zeros and ones.
Major discussion point: Technology and AI solutions for language barriers
Topics: Infrastructure

Cross-linguistic communication is possible when people speak related languages and can understand each other while speaking their native tongues
Explanation: Audience members demonstrated that speakers of related languages can sometimes communicate effectively even when each person speaks their native language. This suggests alternative communication models that don't require a single common language.
Evidence: Hindi and Spanish speakers found some common words close to English; Italian was used as a bridge language.
Major discussion point: Alternative approaches to multilingual communication
Topics: Sociocultural

Current translation technology lacks high-quality oral communication capabilities and sufficient data sets for most languages, with 95% of internet language data being in English
Explanation: A Chinese participant explained that while Google Translate adds 100 languages annually, current AI translation technology cannot reliably handle oral communication for most languages due to insufficient data sets. The dominance of English in internet data (95%) creates significant bias in AI language models.
Evidence: Google Translate adds 100 languages yearly but lacks oral communication quality; AI speech recognition only covers 100 languages at low quality; 95% of internet language data is in English.
Major discussion point: Technology and AI solutions for language barriers
Topics: Infrastructure
Disagreed on: AI language capabilities and limitations

Community-based projects can help people contribute native knowledge in their own languages to address AI language bias
Explanation: An audience member proposed community-driven solutions where people from around the world can contribute knowledge in their native languages to AI systems. This approach could help address the current bias toward English and major languages in AI training data.
Evidence: Current AI models understand English, Spanish, and Chinese but not other languages; lack of cultural and native knowledge representation in AI systems.
Major discussion point: Alternative approaches to multilingual communication
Topics: Infrastructure

Agreements

Agreement points

Internet governance requires multi-stakeholder collaboration
– Internet governance involves comprehensive collaboration between government, private sector, civil society and technical communities in developing principles and standards
– Internet governance establishes rules, policies and procedures applied jointly by all stakeholders while respecting frameworks and decision-making processes
– Internet governance is multidisciplinary, encompassing infrastructure, standards, security, law, economics, development, culture and human rights
– Internet governance involves shared principles, norms, rules and decision-making procedures that shape internet evolution and use
Summary: All speakers agree that internet governance fundamentally requires collaboration among multiple stakeholders including governments, private sector, civil society, and technical communities to develop shared principles, norms, and decision-making procedures.
Topics: Legal and regulatory

Language barriers create significant challenges in international forums
– Communication challenge predates and underlies the digital divide, with language being the biggest barrier to communication
– Some participants are the only speakers of their native language at international forums, creating isolation
– Chaos and confusion result when people cannot understand each other in multilingual settings
– Language separation creates national silos where people only communicate within their linguistic groups
Summary: Speakers consistently acknowledge that language barriers pose fundamental challenges to effective communication and participation in international internet governance discussions.
Topics: Sociocultural

Similar viewpoints

Both speakers challenge the dominance of English as the sole common language and advocate for more inclusive multilingual approaches in international forums
– English dominance is unfair when only one participant is a native English speaker among dozens
– Multiple language options should be provided rather than imposing one common language, similar to UN and EU practices with official languages
Topics: Sociocultural

Technology and AI solutions have potential for addressing language barriers but currently face significant limitations due to data bias toward English and major languages
– AI can theoretically think in all 7,000 languages but defaults to English and major languages when data sets are insufficient
– AI thinks in mathematics and digit proximity rather than any specific language, making it culturally neutral
– Current translation technology lacks high-quality oral communication capabilities and sufficient data sets for most languages, with 95% of internet language data being in English
Topics: Infrastructure

English functions as a practical common language solution despite not being the most widely spoken native language, serving as a politically neutral option in complex multilingual contexts
– English is the third most spoken native language globally, following Chinese Mandarin and Spanish, and serves as the current business language due to dominant power structures
– English serves as the de facto second language and works effectively as a practical solution
– English functions as a politically neutral default language in multilingual contexts like India where native languages are politically charged
Topics: Sociocultural

Unexpected consensus

English as both problem and solution
– English dominance is unfair when only one participant is a native English speaker among dozens
– English serves as the de facto second language and works effectively as a practical solution
– English functions as a politically neutral default language in multilingual contexts like India where native languages are politically charged
– English is necessary as a common language for international gatherings to pursue common goals
Summary: Despite initial criticism of English dominance, there emerged unexpected consensus that English, while problematic, serves as an effective practical solution for international communication. Even those who challenged its dominance acknowledged its utility.
Topics: Sociocultural

Technology limitations despite AI potential
– AI can theoretically think in all 7,000 languages but defaults to English and major languages when data sets are insufficient
– Current translation technology lacks high-quality oral communication capabilities and sufficient data sets for most languages, with 95% of internet language data being in English
Summary: Despite presenting AI as a potential solution, there was unexpected consensus that current technology actually reinforces language inequalities due to data bias, making it less viable as an immediate solution than initially suggested.
Topics: Infrastructure

Overall assessment

Summary: The discussion revealed strong consensus on the fundamental challenges of language barriers in internet governance and the need for multi-stakeholder collaboration, but also unexpected agreement that English, despite its problematic dominance, remains the most practical current solution.

Consensus level: High consensus on problem identification and moderate consensus on solutions. The implications suggest that while participants recognize the inequity of English dominance, they also acknowledge practical constraints that make immediate alternatives difficult to implement. This creates a tension between idealistic multilingual goals and pragmatic communication needs in international internet governance forums.

Differences

Different viewpoints

English as the universal solution vs. multiple language options
– Multiple language options should be provided rather than imposing one common language, similar to UN and EU practices with official languages
– English serves as the de facto second language and works effectively as a practical solution
– English is necessary as a common language for international gatherings to pursue common goals
Summary: Borg Psaila advocates for multiple language choices with simultaneous interpretation in several languages, while audience members argue that English works effectively as a single common language solution for international forums.
Topics: Sociocultural

Fairness vs. practicality of English dominance
– English dominance is unfair when only one participant is a native English speaker among dozens
– English serves as the de facto second language and works effectively as a practical solution
– English functions as a politically neutral default language in multilingual contexts like India where native languages are politically charged
Summary: Paque questions the fairness of English dominance when most participants are non-native speakers, while audience members defend English as a practical and politically neutral solution that works effectively.
Topics: Sociocultural

AI language capabilities and limitations
– AI can theoretically think in all 7,000 languages but defaults to English and major languages when data sets are insufficient
– Current translation technology lacks high-quality oral communication capabilities and sufficient data sets for most languages, with 95% of internet language data being in English
Summary: Huang presents AI as having broad language capabilities across 7,000 languages, while an audience member emphasizes significant limitations in current AI translation technology, particularly for oral communication and less common languages.
Topics: Infrastructure

Unexpected differences

Definition emphasis in internet governance
– Internet governance focuses primarily on government's role in shaping digital policy with less emphasis on civil society
– Internet governance involves comprehensive collaboration between government, private sector, civil society and technical communities in developing principles and standards
– Internet governance emphasizes decentralized management rather than focusing on regulations or government institutions
Summary: While most speakers provided standard multi-stakeholder definitions of internet governance, Borg Psaila uniquely critiqued the terminology itself for overemphasizing government roles. This was unexpected, as it challenged the fundamental framing rather than just the content of internet governance definitions.
Topics: Legal and regulatory

Historical context of language dominance
– English is the third most spoken native language globally, following Chinese Mandarin and Spanish, and serves as the current business language due to dominant power structures
– English serves as the de facto second language and works effectively as a practical solution
Summary: Kataya's historical perspective on language dominance cycles (Arabic, Spanish, English, potentially Chinese) was unexpected, as it reframed the English dominance debate from a practical communication issue to a broader discussion of power structures and historical patterns.
Topics: Sociocultural

Overall assessment

Summary: The main areas of disagreement center on language solutions for international communication, with fundamental tensions between fairness/inclusion versus practicality/efficiency, and between single-language versus multi-language approaches.

Disagreement level: Moderate to high disagreement with significant implications. The disagreements reveal deeper tensions about power structures, cultural representation, and practical governance in international forums. These disagreements could impact policy decisions about language accommodation, technology investment priorities, and the fundamental approach to inclusive participation in internet governance processes.


Takeaways

Key takeaways

– Language barriers are a fundamental challenge in global internet governance that predates and underlies the digital divide
– English dominance in international forums creates unfairness when most participants are non-native speakers
– Multilingual communication without a common language leads to chaos and the formation of linguistic silos where people only communicate within their language groups
– Many participants at international forums are isolated as the sole representatives of their native languages
– Internet governance is defined as a multistakeholder process involving governments, private sector, civil society, and technical communities in developing shared principles and standards
– AI and technology offer potential solutions but currently default to English due to insufficient data sets in other languages, with 95% of internet language data being in English
– English serves as a practical common language solution despite being imposed, functioning as a politically neutral option in multilingual contexts
– Cross-linguistic communication is possible between related languages, where speakers can understand each other while using their native tongues

Resolutions and action items

– Participants were invited to visit the AI booth to learn more about technological solutions for language barriers
– Suggestion to implement language labeling systems in future experiments to help people find common communication languages
– Proposal to provide multiple language options with simultaneous interpretation in several major languages rather than imposing one common language
– Community-based projects should be developed to allow people to contribute native knowledge in their own languages to address AI language bias

Unresolved issues

– Whether English should continue as the imposed common language or if alternative multilingual approaches should be adopted
– How to address the technological limitations of current translation systems, particularly for oral communication
– How to develop sufficient data sets for the thousands of underrepresented languages in AI systems
– How to balance practical communication needs with linguistic diversity and cultural preservation in international forums
– Whether AI will eventually provide a universal solution to language barriers or if human-centered approaches are needed
– How to address the political and cultural implications of language dominance in global governance
– What the future common language might be as global power structures shift (Chinese Mandarin was suggested as a possibility)

Suggested compromises

– Providing multiple language options with simultaneous interpretation in several major languages instead of enforcing one common language
– Using language labeling systems to help participants identify shared languages and form communication bridges
– Accepting English as a practical solution while acknowledging its imposed nature and working toward more inclusive alternatives
– Leveraging AI and technology as culturally neutral tools while building better data sets for underrepresented languages
– Allowing cross-linguistic communication where speakers of related languages can communicate in their native tongues while understanding each other
– Combining technological solutions with human-centered approaches to address both practical and cultural needs

Thought provoking comments

“So what language does AI think in? AI can think in every language, all 7,000 languages. But if we don't have enough data sets, then it thinks in English and other major languages… AI actually can talk in other than the 7,000 languages; they can design their own native computing languages. It's different from human languages.”
Speaker: Ken Huang
Reason: This comment fundamentally reframes the language barrier discussion by introducing AI as both a potential solution and a new complexity. It reveals that AI has its own linguistic limitations (defaulting to English due to data availability) while also having capabilities beyond human languages through native computing languages.
Impact: This shifted the conversation from purely human-centered language solutions to technological possibilities. It prompted Virginia to immediately recognize this as a potential solution worth exploring further, leading to discussion about visiting their booth and whether 'AI is our solution' for future communication.

“I almost got a migraine. Everyone was speaking whatever; I was not understanding. I was only picking up the Swahili speakers, so those are the African languages… It's chaotic.”
Speaker: Audience member
Reason: This brutally honest reaction captures the real human cost of language barriers: the physical and emotional stress of being excluded from communication. It provides visceral evidence of why the Tower of Babel experiment was meaningful.
Impact: This comment grounded the theoretical discussion in lived experience, validating the premise that language barriers create genuine suffering and exclusion. It reinforced the urgency of finding solutions and made the abstract concept of communication challenges tangible.

“So I do think English as a common language does work in an international context. English is the solution for the chaotic Tower of Babel situation. So I do think English is the solution for the chaos.”
Speaker: German/Chinese-speaking audience member
Reason: Coming from someone whose native language is German but who chose to communicate in Chinese during the experiment, this represents a pragmatic conclusion based on direct experience. It is particularly insightful because it comes from someone who experienced the chaos firsthand and made a reasoned choice.
Impact: This comment introduced the first strong argument for English as a practical solution, setting up a debate between idealism (multilingual inclusion) and pragmatism (English as lingua franca) that continued throughout the discussion.

“And then, just coming back to the point: India has so many dialects. We have no national language because language is very political… English becomes the default language because it's politically the most equal.”
Speaker: Hindi-speaking audience member
Reason: This comment reveals the deep political dimensions of language choice, showing how English can paradoxically serve as a neutral option when local languages carry political baggage. It also highlights how newer concepts like 'internet' may not have native language equivalents.
Impact: This shifted the discussion from viewing English as purely imposed to understanding it as sometimes politically neutral. It introduced the concept of language politics and the practical reality that technical terms often lack native equivalents, adding nuance to the debate.

“I want to challenge the notion of one solution, of English being one solution… why not give people more options… what if in the discussions that we have there are simultaneous interpretations in at least a handful of different languages, so that people have choices.”
Speaker: Stephanie Borg Psaila
Reason: This comment reframes the entire debate from a binary choice (English vs. native languages) to a multiple-choice solution. Drawing on UN and EU models, it offers a practical middle ground that acknowledges both inclusion needs and practical constraints.
Impact: This comment elevated the discussion from either/or thinking to both/and solutions, introducing the concept of strategic multilingualism. It moved the conversation toward more sophisticated policy solutions rather than simple language dominance.

“AI doesn't think in any language; it thinks in terms of mathematics, the proximity of different notions, which are coded in digits… it is mathematics, actually; everything which is digital is digits, zero or one… let's trust and explore this world.”
Speaker: Audience member
Reason: This comment provides a fundamental insight into how AI actually processes language: not linguistically but mathematically. It suggests that digital solutions might be culturally neutral in ways human languages cannot be.
Impact: This deepened the technological discussion by explaining the mathematical foundation of AI language processing. It offered hope for truly neutral communication tools while also prompting Virginia's humorous observation that binary language only requires learning 'two letters.'

“Almost 95% of data sets, or language data, on the internet is English, so the rest of the languages, and the people, are not communicating in their own language on the internet… AI cannot understand your own culture, your own native language, or your own local knowledge.”
Speaker: Una from China
Reason: This comment exposes the fundamental data bias in AI systems and connects it to cultural preservation. It reveals how technological solutions may perpetuate rather than solve linguistic inequality, while also describing community-based solutions.
Impact: This comment brought the discussion full circle by showing how even technological solutions reflect existing power imbalances. It introduced the concept of community-driven language preservation and highlighted the cultural dimensions of the digital divide.

Overall assessment

These key comments transformed what began as an experimental demonstration into a sophisticated, multilayered discussion about language, power, technology, and inclusion. The conversation evolved from experiencing the chaos of multilingual communication to exploring three distinct solution pathways: pragmatic acceptance of English dominance, strategic multilingualism with multiple official languages, and technological solutions through AI. The most impactful comments revealed the political dimensions of language choice, the limitations and possibilities of AI solutions, and the deep connection between language access and cultural preservation. Together, they demonstrated that the language barrier in global governance is not just a communication problem but a complex intersection of politics, technology, culture, and power that requires nuanced, multi-pronged solutions rather than simple universal fixes.

Follow-up questions

Is AI our solution for multilingual communication? How will we communicate in the future using AI?
Speaker: Virginia (Ginger) Paque
Explanation: This question emerged after Ken Huang presented AI's capability to think in multiple languages, prompting exploration of whether AI could solve multilingual communication challenges in global governance.

What language does AI think in, and how does it handle the 7,000 human languages?
Speaker: Ken Huang
Explanation: This raises important questions about AI's linguistic capabilities and limitations, particularly regarding data sets and default languages in AI systems.

How can we develop better oral communication translation technology beyond current text-to-text translation?
Speaker: Una (from China)
Explanation: Current AI speech recognition can only handle about 100 languages with limited quality, creating a gap in real-time multilingual communication.

How can we address the data imbalance where 95% of internet language data is in English?
Speaker: Una (from China)
Explanation: This imbalance affects AI's ability to understand and process non-English languages and cultures, creating barriers to inclusive communication.

Could internationalized domain names provide insights for multilingual internet governance?
Speaker: Virginia (Ginger) Paque
Explanation: Domain names must work across different languages and writing systems, potentially offering lessons for broader multilingual communication solutions.

How can we create community-driven projects where people contribute native knowledge in their own languages?
Speaker: Una (from China)
Explanation: This addresses the need for more inclusive data collection and knowledge sharing that preserves cultural and linguistic diversity.

What would happen if we provided multiple language options with simultaneous interpretation rather than defaulting to English?
Speaker: Stephanie Borg Psaila
Explanation: This challenges the single-language approach and explores how offering choices in major languages could improve participation in global forums.

How can we better utilize cross-language communication where people speak their native language and understand responses in another language?
Speaker: Virginia (Ginger) Paque and Stephanie Borg Psaila
Explanation: This explores the phenomenon of asymmetric multilingual communication as a potential solution to language barriers.

Is English an imposed language or simply the most practical solution for international communication?
Speaker: Virginia (Ginger) Paque
Explanation: This addresses the political and practical dimensions of language choice in global governance, questioning whether English dominance is problematic or pragmatic.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Day 0 Event #255 Update Required Fixing Tech Sectors Role in Conflict

Day 0 Event #255 Update Required Fixing Tech Sectors Role in Conflict

Session at a glance

Summary

This discussion, titled “Update Required,” focused on ensuring tech companies respect international humanitarian law and evolving standards regarding private sector roles in armed conflicts. The panel featured experts Marwa Fatafta from Access Now, Chantal Joris from Article 19, and Kiran Aziz from KLP, a Norwegian pension fund, discussing corporate accountability in conflict zones.


Fatafta emphasized that tech companies are never neutral actors in armed conflicts, outlining three ways they contribute to harm: directly causing human rights violations through censorship, providing technological assistance to military forces, and mirroring state policies of discrimination. She cited examples from Gaza, including Google and Amazon’s Project Nimbus providing cloud services to Israeli military, and Microsoft supplying engineering services to defense units. Despite civil society pressure, she noted no meaningful positive changes in corporate behavior, with companies increasingly dropping voluntary commitments against military AI development.


Joris explained the legal framework, noting that both international humanitarian law and human rights law apply during conflicts, with enforcement primarily through international criminal law and domestic courts. She highlighted challenges in attribution and evidence gathering, particularly as tech companies become more integrated with military operations. The discussion revealed that corporate executives could theoretically face liability under international criminal law, though few precedents exist.


Aziz described investor perspectives, explaining how institutional investors rely on public information and civil society reports to assess risks. She noted the extreme difficulty in engaging tech companies compared to traditional sectors, leading to exclusions from investment portfolios when companies fail to respond to human rights concerns. The panel concluded that stronger government regulation, transparency requirements, and strategic litigation are essential for meaningful corporate accountability in the tech sector.


Keypoints

## Major Discussion Points:


– **Tech companies’ direct involvement in armed conflicts**: Discussion of how technology companies are not neutral actors but actively contribute to conflicts through providing cloud computing services, AI tools, facial recognition technologies, and other services to military forces, with specific examples from the Gaza conflict including Google’s Project Nimbus and Microsoft’s engineering services to Israeli military units.


– **Legal frameworks and enforcement challenges**: Examination of how international humanitarian law (IHL) and human rights law apply to tech companies, the difficulties in establishing corporate accountability under current legal systems, and the potential for strategic litigation through domestic courts, international criminal law, and investor pressure.


– **Corporate transparency and due diligence failures**: Analysis of tech companies’ extremely low response rates to civil society inquiries (4% compared to 26% for companies in Russia/Ukraine conflicts), their refusal to conduct meaningful human rights due diligence, and their lack of transparency about operations in conflict zones.


– **Evidence requirements for accountability**: Discussion of what types of evidence are needed to hold tech companies accountable, including impact stories, corporate relationship mapping, government contract transparency, and the burden of proof challenges in different legal contexts.


– **Increasing militarization of civilian tech**: Concern about the trend of tech companies dropping voluntary commitments against military applications, forming partnerships with defense contractors, and executives joining military units, blurring the lines between civilian technology and military operations.


## Overall Purpose:


The discussion aimed to explore avenues for ensuring tech companies respect international humanitarian law and to develop strategies for corporate accountability in the technology sector’s role in armed conflicts. The session sought to identify enforcement mechanisms, evidence requirements, and collaborative approaches between civil society, investors, and legal systems to address the largely unchecked influence of tech companies in conflict situations.


## Overall Tone:


The discussion maintained a serious, urgent, and somewhat frustrated tone throughout. Speakers expressed deep concern about the lack of corporate accountability and transparency, with particular frustration about companies’ unwillingness to engage meaningfully with civil society. The tone was analytical and solution-oriented, with participants sharing expertise and brainstorming practical approaches, but there was an underlying sense of urgency given the ongoing conflicts and the increasing integration of technology into warfare. The atmosphere was collaborative among panelists and audience members, united in their concern about the current state of corporate responsibility in the tech sector.


Speakers

– **Meredith Veit**: Session moderator, leads the discussion on tech companies and international humanitarian law


– **Marwa Fatafta**: From Access Now, leads policy and advocacy work on digital rights in the Middle East and North Africa, has written extensively on digital occupation in Palestine and focuses on the role of new technologies in armed conflicts


– **Kiran Aziz**: Representative from KLP (Norwegian pension fund), works on investor engagement and exclusion policies related to human rights violations


– **Chantal Joris**: From Article 19, senior legal officer focusing on platform regulation and freedom of expression


– **Audience**: Multiple audience members who asked questions during the Q&A session


**Additional speakers:**


– **Dr. Pichamon Yeophantong**: Mentioned multiple times in transcript but role/expertise not clearly defined


– **Phillipe Stoll**: Mentioned multiple times in transcript but role/expertise not clearly defined


– **Jalal Abukhater**: Mentioned in transcript but role/expertise not clearly defined


– **Anriette Esterhuysen**: Audience member, works with the Association for Progressive Communications


– **Monika Ermert**: Audience member, reporter


– **Audrey Moklay**: Audience member, from Open Mic


– **Sadhana**: Audience member who asked a question about the Genocide Convention


Full session report

# Update Required: Ensuring Tech Companies Respect International Humanitarian Law


## Executive Summary


The panel discussion “Update Required” examined the critical issue of ensuring technology companies comply with international humanitarian law in armed conflicts. Moderated by Meredith Veit, the session brought together experts including Marwa Fatafta from Access Now, who leads policy and advocacy work on digital rights in the Middle East and North Africa; Chantal Joris from Article 19, focusing on platform regulation and freedom of expression; and Kiran Aziz from KLP, a Norwegian pension fund, who works on investor engagement regarding human rights violations.


The discussion revealed that technology companies are increasingly active participants in armed conflicts rather than neutral service providers. A particularly striking finding was the dramatically low response rate from tech companies to accountability inquiries—only 4% compared to 26% for similar outreach regarding the Russia-Ukraine conflict. The panel explored multiple avenues for accountability, from legal frameworks to investor pressure, while acknowledging significant barriers including corporate opacity and government protection of domestic tech companies.


*Note: This summary is based on a transcript with significant technical issues and garbled sections, particularly affecting the complete capture of some speakers’ contributions.*


## The Challenge of Tech Company Neutrality


Marwa Fatafta fundamentally challenged the notion of tech company neutrality, stating: “Tech companies are never neutral actors in situations of armed conflict. They exacerbate the dynamics of the conflict and sometimes even drive them or fuel them, particularly in contexts where there are asymmetries of power between warring parties.”


Fatafta outlined three primary ways tech companies contribute to harm in conflict zones:


**Direct Human Rights Violations**: Companies engage in systematic censorship and content removal that mirrors state policies of discrimination. In the Palestine context, this includes widespread removal of Palestine-related content and suppression of documentation of human rights violations.


**Technological Assistance to Military Forces**: Fatafta provided specific examples from the Gaza conflict, including Google and Amazon’s Project Nimbus, described as a $1.2 billion contract providing cloud computing services to the Israeli military, and Microsoft’s provision of 19,000 hours of engineering and consultancy services to Israeli defense units including “Unit A200 and Unit 9900.”


**Mirroring State Policies**: Technology companies replicate discriminatory state policies through their service provision, including differential geographic service availability and varying representations of occupied territories.


## The Militarization Trend


A concerning development highlighted by Fatafta was the increasing militarization of civilian technology companies: “There’s a surge in increasing militarization of civilian tech… both Google and OpenAI have both quietly dropped their voluntary commitments earlier this year not to build AI for military use or surveillance purposes.”


She also noted the direct integration of tech executives into military structures: “Senior executives from high-tech firms, specifically Meta, OpenAI and Palantir, are joining the US Army Reserve at a new unit called Executive Innovation Corp.”


This trend represents a fundamental shift in the technology sector’s relationship with military operations, moving from maintaining ethical boundaries to actively pursuing defense partnerships.


## Legal Framework Perspectives


Chantal Joris provided legal context on how international humanitarian law applies to technology companies, though the transcript quality limits the complete capture of her contributions. She discussed the potential for corporate executives to face liability under international criminal law, noting that “corporate executives, in theory, under the very, very high thresholds that are under the Rome Statute could be liable under international criminal law.”


Joris emphasized the importance of government transparency, noting that many service contracts fall under national security exemptions, limiting access to crucial evidence needed for accountability efforts.


## The Accountability Gap


Perhaps the most striking revelation was the extent of corporate resistance to accountability measures. As noted in the opening remarks, there was “an astonishingly low 4% response rate from companies,” which was described as “unprecedented,” particularly when compared to the 26% response rate for similar outreach regarding tech companies operating in Russia and Ukraine.


Fatafta noted that even when companies claim to conduct human rights due diligence, these processes are fundamentally flawed: “Even when companies claim to conduct audits, they lack insight into how their technologies are used, making due diligence ineffective.”


## Investor Perspectives


Kiran Aziz provided insights into how institutional investors approach tech company accountability. She explained that “institutional investors rely on long-term perspectives that incorporate material risks including human rights violations as financial risks.”


However, investors face significant challenges due to corporate opacity: “Investors depend heavily on civil society reports and public domain information since companies provide inadequate reporting on human rights impacts.”


Aziz noted that “exclusion of companies from investment portfolios can be effective when done transparently with public documentation of reasons,” but emphasized that “tech companies are increasingly difficult to engage with compared to traditional sectors, often only referencing policies without discussing concrete matters.”


## Government Protection and Political Barriers


The discussion revealed how government policies actively shield tech companies from accountability measures. Fatafta highlighted the political dimensions: “The Trump administration is taking an extremely protectionist approach to their tech sector… they will not grant visas to foreign officials who have mandated quote-unquote censorship by these companies.”


This government protection creates significant barriers to international accountability efforts and reflects the strategic importance of tech companies to national competitiveness.


## Evidence and Documentation Challenges


The discussion emphasized the critical importance of evidence gathering for accountability efforts. Audience members stressed the need for:


– **Impact stories** showing how specific corporate actions led to concrete human rights violations


– **Corporate relationship mapping** to understand broader patterns of partnerships


– **Hard evidence** including contracts and internal communications for litigation


– **Risk assessment documentation** for investor engagement


The challenge is that different accountability mechanisms require different types of evidence, but corporate opacity makes gathering any form of evidence extremely difficult.


## Audience Engagement and Practical Concerns


Significant portions of the discussion involved audience questions and responses, reflecting concerns about:


– The effectiveness of different accountability mechanisms


– The role of documentation in building cases against tech companies


– Strategies for overcoming corporate resistance to engagement


– The adequacy of current legal frameworks for addressing tech sector challenges


## Areas of Consensus


Despite different professional backgrounds, the panelists demonstrated consensus on several critical issues:


– Tech companies are not neutral actors in conflicts


– Voluntary corporate responsibility frameworks have failed


– Corporate transparency is inadequate, with unprecedented resistance to accountability


– Current due diligence frameworks are insufficient for the tech sector


– The militarization trend is deeply concerning


## Unresolved Challenges


The discussion highlighted several unresolved questions:


– How to effectively regulate US-based tech companies given government protectionism


– What burden of proof standards should apply to corporate due diligence


– How to address attribution challenges when tech executives integrate into military structures


– How to access information protected under national security classifications


## Conclusion


The “Update Required” discussion revealed the significant challenges in holding technology companies accountable for their roles in armed conflicts. The combination of corporate resistance, government protection, and inadequate legal frameworks creates substantial barriers to accountability.


The speakers’ consensus suggests that incremental reforms are insufficient and that systemic change is required, including new legal frameworks specifically designed for the tech sector and coordinated international action. The path forward requires acknowledging that tech companies are active conflict participants and developing appropriate accountability mechanisms.


The discussion’s title proves apt—an update is indeed required not just for tech companies’ practices, but for the entire ecosystem of accountability mechanisms needed to address the unprecedented challenges posed by the technology sector’s role in armed conflicts.


*This summary reflects the content available in the provided transcript, which contained significant technical issues and incomplete sections that may have affected the complete capture of all speakers’ contributions.*


Session transcript

Meredith Veit: Welcome to the session, Update Required. We're going to discuss avenues for ensuring that tech companies respect international humanitarian law, as well as evolving international norms and standards regarding the role of private sector actors in conflict. I have three fantastic experts here that are going to help guide us through the discussion today. Get prepared. This is going to be quite active. We have mics here on either side of the room, so we expect a lot of audience participation, given all the expertise out there and online. First we have Marwa Fatafta from Access Now. She leads their policy and advocacy work on digital rights in the Middle East and North Africa, and she's written extensively on the digital occupation in Palestine and focuses on the role of new technologies in armed conflicts. We also have Chantal Joris from Article 19. She is a senior legal officer focusing on platform regulation and freedom of expression in… Under the U.N. Guiding Principles on Business and Human Rights, in 2014 we conducted a survey on heightened human rights due diligence with over 104 technology companies operating in or providing services to the occupied Palestinian territories and/or Israel, and only three companies got back to us in detail, actually responding to the questions of the survey, making it nearly impossible to determine if and how heightened human rights due diligence is occurring at all in a context that has long been exposed to conflict-related risk. We have also reached out to a number of tech companies in the Middle East. An astonishingly low 4% response rate from companies is unprecedented in the Resource Center's history. Previously we had sent a similar survey to tech companies that were operating in Russia and Ukraine, and 26% had responded by comparison. Of course, both of these numbers are abysmally low, which means we need more transparency about what is happening around the world where conflict is taking place and what the impact looks like. One other thing I will note is that companies are not only profiting from conflicts but exacerbating them, furthering harms. We do have a handful of instances where we have seen corporate accountability for aiding war crimes and crimes against humanity playing out in courts and boardrooms, whether through convictions or sanctions, even as companies collectively roll back their commitments to uphold their own principles. We're going to talk about a few of these. We have heard a lot of things today, such as governments placing export controls on companies that are selling dual-use tech to malign actors.
There's currently a shareholder resolution before Alphabet requesting that the company carry out heightened human rights due diligence regarding its operations in conflict zones. And a number of Norwegian investors have excluded companies over their misconduct in relation to conflict and international law. While these examples are incredibly important and noteworthy, and hopefully we surface even more examples during this discussion today, they should not be the exception. Guaranteeing that tech companies are not involved in breaching international humanitarian law should be the minimum requirement. For the tech sector, we have yet to set a strong enough precedent for accountability: no major tech company or executive has been criminally convicted for violating international humanitarian law, although there is mounting evidence, and, as we know, tech companies are not neutral actors in many conflicts. So we're going to spend the rest of our time today, about 50 minutes, discussing this topic together and diving into what's needed for greater corporate accountability for the tech sector's largely unchecked and increasingly powerful and pervasive role in conflict. We can start off with our expert interventions, and then we'll open up to the floor to talk a bit further together. So first I'll start all the way to my left with Marwa, asking her to kick us off. Can you reflect a little bit more on the different ways in which tech companies have been involved in conflict, and have we seen any meaningful positive change in corporate behavior in response to civil society or regulatory pressure?


Marwa Fatafta: Thank you very much, Meredith, and thanks, everyone, for attending this session. I will start with the point you ended with, to emphasise the fact that tech companies are never neutral actors in situations of armed conflict. They exacerbate the dynamics of the conflict and sometimes even drive or fuel them, particularly in contexts where there are asymmetries of power between warring parties. They can facilitate human rights abuses or, in some cases, even contribute to atrocity crimes. I have been primarily focused on the unfolding genocide in Gaza over the past year and a half, so most of my examples will derive from this particular context, which is important because in some ways it might be a foreshadowing of the future of cyber warfare and the involvement of tech companies. I will expand on that in due course. I can summarise the ways in which tech companies have been involved in conflict in three notable patterns. Firstly, tech companies can be responsible for directly causing adverse human rights impacts that undermine or violate people's rights, including the right to freedom of expression, the right to peaceful association and assembly, the right to bodily security and non-discrimination, among other rights encoded and enshrined in the International Covenant on Civil and Political Rights, as well as economic, social and cultural rights. An example of that is what you mentioned with regard to censorship by social media companies and the systematic removal of Palestine-related content online. The second trend is that some companies can contribute to adverse impacts via third parties, such as, for example, the Israeli government or another military, by providing direct technological assistance, products and services. In the context of Gaza, this includes cloud computing services, AI tools such as LLMs, and facial recognition technologies, among others, which have been linked to egregious violations of international law, gross human rights abuses, crimes against humanity, war crimes, and possibly the crime of genocide, which is pending before the International Court of Justice. And here I want to mention just a few examples. We know, for instance, that Google and Amazon have this $1.2 billion project providing a national cloud service or infrastructure to the Israeli government called Project Nimbus. Since the start of the war, those services have surged in demand. We know that Google deepened its business relationship with Israel, particularly with the Ministry of Defense, in March 2024, and provided them with a landing zone. According to media reports, they even created a classified team composed of Israeli nationals with security clearances, specifically tasked with receiving sensitive information from the Israeli government, providing specialized training with government security agencies, and participating in joint drills and scenarios tailored to specific threats. Amazon Web Services, also according to media reports, provided Israel's military intelligence with server farms that allow endless storage for the mass surveillance data that Israel has gathered on almost everyone in the Gaza Strip. And beyond supplying cloud infrastructure, according to media investigations, they have, on occasion…
This is just to demonstrate that the services they provide are substantial in nature, indicated by the surge in demand that we've seen across these different companies. Another point I would like to mention is that this is not only about providing technological support, but actually providing human resources, trainings and joint exercises. For example, some leaked documents from Microsoft had shown that the Israeli Ministry of Defense had purchased approximately 19,000 hours of engineering and consultancy services from Microsoft, and Microsoft teams had provided assistance on-site, on military bases, as well as remotely to the IDF, including to units called Unit 8200 and Unit 9900, which are notorious for military surveillance in particular. A third way in which tech companies are involved in armed conflict is that companies can contribute to adverse impacts in parallel with a third party, in this case the Israeli government or the military, leading to cumulative impacts. For instance, a number of tech companies have mirrored Israel's state policy of apartheid and segregation in the way they provide, prohibit or withdraw their services to the Palestinians. One clear example is Google Maps, which, if you use it in the West Bank or in the occupied Palestinian territories, only treats you as if you are an Israeli settler. You're only given roads and maps that connect Israeli settlements, but you're not given any roads between, for example, Palestinian towns or villages, putting people at direct safety risk. PayPal is another interesting example: if you are an Israeli settler living in an illegal settlement, you can access PayPal's financial services, but if you're a Palestinian, you're deprived of that service, which would otherwise contribute to the development of Palestinian communities and the economy, something that has been written about by the World Bank and other UN agencies. This shows again the degree to which these tech companies, by depriving communities going through a situation of military occupation of their services, or refusing to provide them for whatever reason, can contribute to the cumulative impact of the occupying power and the policies it has enshrined in that area. To your question, have we seen any positive shifts? Quick answer: no. Unfortunately, and particularly in the Palestine context, the survey that you shared in the beginning reflects an experience that we share ourselves. We have written letters and tried to meaningfully engage with tech companies, first to point to and show a body of evidence of the harms that they are directly or indirectly contributing to. But most of those attempts have been unproductive. We haven't gotten any productive answers from the companies with regard to their conduct; for instance, they were unable to answer even very simple questions such as: have you conducted heightened human rights due diligence in order to identify and mitigate such harms? There is zero transparency with regard to that conduct.
But also, even when they do succumb to pressure: Microsoft, for example, recently issued a statement after a year and a half of public mobilization, not only from civil society but particularly from their own tech workers, in which they said, well, we conducted an audit to see whether our technologies have contributed to harm or the targeting of civilians in the Gaza Strip, and while we don't have insight into how our technologies are used by Israel, especially in air-gapped military bases, we concluded that we have not contributed to any harm. That contradiction in itself shows you how, even under the UNGPs, when companies say they're going to do heightened human rights due diligence, it results in a box-ticking exercise where they really don't have any insight into, or ability to control, how their technologies are being used. Finally, I want to end on a trend that is quite the opposite of where we want to see companies going. There's a surge in the militarization of civilian tech provided by those companies. For example, Google and OpenAI both quietly dropped their voluntary commitments earlier this year not to build AI for military use or surveillance purposes, signaling their readiness to deepen their ties with the arms industry. Within a month of amending its AI principles, Google signed a formal partnership with Lockheed Martin, one of the biggest defence contractors in the world. OpenAI, which maintains a non-profit status, also announced a partnership with a defence tech company called Anduril to deploy its technologies on the battlefield. Anduril and Meta are also partnering to design, build and field a range of integrated virtual reality products for the US military. And last week, there was a very disturbing announcement that senior executives from high-tech firms, specifically Meta, OpenAI and Palantir, are joining the US Army Reserve at a new unit called Executive Innovation Corp, in the rank of lieutenant colonel, to provide tech advice for the US military. So there we see a trend where tech companies are not only providing militaries with their services, but are possibly even becoming combatants or taking an active role in the military, which has implications that Chantal maybe can…


Meredith Veit: This is a perfect segue. Thanks, Marwa. Chantal, amongst these blurred lines, confusing commitments, and shifting definitions of what is a tech company and what is a defence company, and whether they are one and the same now, can you tell us a little bit more about what the hard law actually tells us when we're talking about international humanitarian law? What friction exists in applying IHL to companies, and how does this relate to other existing frameworks that should be helping to guide states and companies, like international human rights law?


Chantal Joris: Thank you, Meredith, and thanks everyone for joining. I will try to unpack some of these questions a little bit. I think there are a few interesting developments that Marwa also mentioned. One thing I remember, I believe it was in the Washington Post, was a headline saying that the social media company Meta is providing these virtual reality products for the US military, and I thought: well, is it then still really a social network? These companies are clearly merging into something so much bigger, and that has obviously been going on for a long time. When we look at this increased integration into the military, legally speaking, it also raises some quite significant questions around attribution. Where does responsibility sit, and does it become a state obligation if a Meta executive operates within the military? He might be a combatant, but it also gives rise to state obligations directly. So there are a lot of questions that, as the facts become more merged, become more complicated to answer. Maybe I will first talk a little bit about the legal obligations and then share some thoughts when it comes to enforcement. I think we need to look at both international humanitarian law and human rights law, because they are different in their application, although both apply during armed conflicts, and in that sense they offer different opportunities for enforcement and accountability, I would say. When it comes to humanitarian law, of course, it starts applying in times of armed conflict, non-international or international, and it is actually more used to being applied to actors that are not state actors, such as non-state armed groups, but also to individuals. It wouldn't necessarily apply directly to a company, but it does apply to individuals that operate within a company when those business activities have a nexus to an armed conflict. And as we've heard, traditionally speaking, you might be thinking more of a mining company that is on the ground or a private military security company that is on the battlefield. These technology companies might have seemed a bit further removed, but they are increasingly so closely intertwined with how these battles are fought that in many cases, obviously always context-dependent, there is a relatively easy point to make that there is this nexus to an armed conflict. And that means that the staff, in that sense, would also be bound by IHL. At the same time, for human rights law we have this very famous soft law instrument, the UN Guiding Principles on Business and Human Rights, which speaks prominently about human rights and humanitarian law. But in terms of hard law, as long as we don't have an international business and human rights treaty, it is of course more established for state actors. And that also translates into a bit of a difference when it comes to enforcement. So when you look at enforcement and accountability in terms of humanitarian law, you will primarily think about international criminal law.
Or let's say, I mean, I think some people in the humanitarian sector would disagree, because they say enforcement is not only litigation, but I speak specifically about legal obligations and litigation in that sense. There as well, on the international level, you might under some circumstances be able to go to the International Criminal Court. We've seen recently that the Office of the Prosecutor of the ICC is drafting a policy on how cyber conduct might fall under the Rome Statute. And there, at the ICC, you might not have legal persons as such that can be held accountable or criminally liable. But corporate executives, in theory, under the very, very high thresholds that are under the Rome Statute, could be liable under international criminal law. So that's on the international level, although we probably still have a bit of a way to go until it is an actual realistic prospect that those cases are brought by the prosecutor. Let's see, let's hope. On a domestic level, I think not only with respect to tech companies but generally speaking, liability for war crimes for corporate executives is not something where we have a huge amount of case law. There is the Lundin Oil case, where we talk about corporate executives potentially being liable for aiding and abetting war crimes. There's also the Lafarge case, where it was really the company's own liability that was at stake. So these cases are becoming more prominent, but it often really depends on the domestic framework. Do we have something like universal jurisdiction enshrined in the domestic criminal code? Is there potential corporate criminal responsibility, or does it only go through individuals? And depending on that, you know what evidence you might need to provide, you know what legal grounds you need to prove, and you can start building a case. Just to finish up: of course, in some domestic jurisdictions there can also be a legal obligation to conduct human rights due diligence, which might include heightened human rights due diligence as well. We have also seen cases brought in the UK against parent companies over some of their operations in Zambia. I think that is something that we can learn from. How did they bring these cases? What evidence did they bring? How can we translate this to the tech sector, which is more opaque, probably less transparent, than other sectors that we might be used to?


Meredith Veit: And this is a great setup for how we're going to open up the floor to questions soon. We're going to discuss what kind of evidence is needed and how we can work more collaboratively to get there. When it comes to investors, there may be different criteria about what constitutes evidence for investor action. What kind of information do investors typically need? And at what point can investors actually act when portfolio companies are implicated in allegations of potential violations of international law? Are there any examples that you can talk about from your experience with KLP?


Kiran Aziz: Thank you very much, and not least for having this conversation, which is much needed. I thought I'd start by very briefly giving an introduction to how we work as investors and what kind of opportunities we have, and, not least, limitations. I think what I will say is mostly the case for a lot of the large institutional investors. If you look at most of the international investors, they have a long-term perspective, because they are investing people's pension and savings money. And when you take a long-term perspective into account, it's not just about the financial returns; there is also material risk which lies in a company. And this is where you try to embed respect for human rights. These practices are also becoming a legal obligation: in Norway we have a Transparency Act which requires investors to perform due diligence on their investments. And how do we know that there is a risk within a company? Well, we as investors rely on information which is in the public domain. This could be information, of course, from companies' reporting. But when it comes to human rights, especially in conflict areas, I wouldn't say the information or reporting from the companies is a really helpful tool. It mostly comes from civil society. And this is why, when we engage on this topic, it's not just about engaging with companies; it's also really vital to engage with other stakeholders. The UN High Commissioner for Human Rights and organizations such as yours have done a really, really great job in conducting reports which tell us that companies are present in some conflict areas where they have direct involvement. And this is where it is challenging, because, as you said, there are about 110 armed conflicts, and it's not necessarily the case that we will have exposure to all of these. There will be a few, such as the war in Gaza, the West Bank, Myanmar, Yemen and Sudan, to mention some, where we see the companies play a vital role compared to a lot of other conflicts, such as in Afghanistan and so on. But again, our challenge as investors is very much about getting information which can link human rights violations to a company's contribution. This is where we struggle, and we really need help from stakeholders other than the companies. That's one thing. The other thing is that if you look at the UNGPs, you would see that performing heightened due diligence is really the core tool, which we are also trying to implement in our investments. When you are a passive investor, let's say you invest in index funds, companies normally become part of your portfolio and then you perform a kind of due diligence later on. But what we have done now, given that there has been so much focus on the UNGPs, is that we have started to screen companies up front. We did that when Saudi Arabia came into our portfolio as a new market.
We screened companies up front before we decided to invest, also because of Saudi Arabia's involvement in the war in Yemen. And, you know, we have traditionally been used to working with business models which are quite traditional. The expectations and standards which are there are very much tailored for traditional businesses, arms companies and so on. But you see that the tech companies are playing a much more important role, and we as investors are really struggling to get any engagement with the companies at all. If there is an engagement, very often they would just give reference to their policies; they are not really interested in discussing the concrete matter. That's one thing. The other thing we are seeing is that we as investors have some tools, but they are limited, because we really depend on governments taking their responsibility. This is a development which is sadly going in a direction where fewer and fewer governments are taking their responsibility, and most of the responsibility is left to investors and the business community. When it comes to conflict areas, we try as much as we can, but I have to say, especially when it comes to tech companies, and I think this goes for most investors, not just KLP, that we struggle to get any engagement with the companies at all. And then you sit there: you have information from civil society which says that the companies might be contributing, and you are not able to get engagement with the companies. Well, I would say that the burden of proof should then be on the company's side, because we have given them a really fair chance to express why their involvement should or should not be seen as a contribution to human rights violations. If they are not responding to our queries, then exclusion would be our choice. It's not that we would like to do that, but this is where it's really important. When we exclude a company, at least at KLP, we publish a quite thorough exclusion document. This is a way to hold companies accountable, but it also helps other investors get insight into where we draw the line between what is acceptable and not acceptable. And I think this is also a way to try to put the companies, and their contribution, on the agenda. As you mentioned in your introduction, we have excluded a lot of companies, and being transparent about the exclusions has really led more investors to follow them. I think it's also important for the companies to understand that if they improve their practices, they can be re-included. So I will stop there, and I'm happy to address the questions you might have.


Meredith Veit: And KLP adopts a very important best practice that not all investors do, which, as you mentioned, is this public list and explanation as to why, clearly citing the human rights harms or concerns in relation to international law. That signals to the rest of the market that, in order for funds to flow, companies have to respect investor principles and policies with regard to heightened due diligence. So we have about 20 minutes left. Now, with the help of our tech colleagues, we'll put a discussion question up on the board and open it up to all of you. We want to dive a little bit deeper into this aspect of accountability and enforcement, which we've heard about from different angles, with different opportunities and different challenges. So the question for all of you is: what kind of evidence would lead to stronger enforcement actions against tech companies that facilitate violations of international humanitarian law, and international law in times of conflict more broadly? We can consider this from the different angles we've tried to open up. Among the different enforcers that have leverage over tech companies are, primarily, states, which need to abide by international law according to their obligations. We have investors, who also have a responsibility to use their leverage with regard to tech companies within their portfolios, according to the UN Guiding Principles. And we have courts; strategic litigation is going to be a very key phrase for the coming five-plus years, perhaps, in order to really push for accountability where we can. So with that, feel free to come up to the microphones on the side of the room. We'd like to hear from all of you: experiences, ideas or reflections, based on what we've just heard, as to what kind of evidence and what else we need in order to start pushing for more enforcement in this sector. Any brave souls out there? Or anyone online as well? I know we have some people tuning in online.


Audience: My name is Anriette Esterhuysen. I work partly with the Association for Progressive Communications, and we've started a best practice forum in the Internet Governance Forum that is actually also looking at some of these things. I think the kind of evidence that is really useful is the impact: the stories of what the results are. I think there are many people in the investment and digital justice space who might not be aware that this is not just bad action or irresponsible behavior; it actually can affect people's lives. So certainly the stories of what the impact is. And then the other kind of evidence which is also important for civil society is: what are the relationships? I know it can be quite difficult to gather this kind of evidence, but in the way that Marwa was revealing different companies' partnerships with Lockheed Martin, I think that's very useful for us as well. It's important to understand how those corporate actors operate, not just in relation to one particular conflict, but what the ethical backing, or lack of it, is in how they form partnerships and make decisions about what they do where. So different types of evidence, I think, could definitely make it easier for us to try to hold them accountable. And please do come: people interested in this topic, come to the Best Practice Forum meeting, which will be later this week, on Thursday. I think that's the right time. Thanks for the panel. Very good.


Meredith Veit: Thursday at 2 p.m. Yes, at 2 p.m.


Audience: Hi, my name is Monika Ermert. I'm a reporter. I understand it's difficult to engage with the companies. Is there any pathway through governments to get them to engage? And did you try to engage with them to come here and sit on this panel? Did you reach out to them? It would be interesting to know.


Kiran Aziz: No, we haven't reached out to them for this particular forum. But if you look at a lot of the tech companies, I think if they are based within the EU, it is easier to engage via the governments, and we try to use that. But if you look at Meta, Amazon, all of them, their headquarters are in the US, and I think it's very much linked to the current administration and how they perceive this. International law as such, everything, is being challenged at this time. If you engaged with the companies, at least before this current administration, some of them made an effort, but now it's as if there are no boundaries for them. And I think it also has to do with the fact that they know there is a lack of accountability in several places, and there isn't anybody to hold them accountable. Even if we as investors exclude these companies, the vital part here is that these companies have so much influence that, even if it's a really, really difficult path, it's really important that we and civil society are still there chasing them, even if they don't want to engage. Because we have also heard from some voices inside the companies that it helps when investors knock on the door.


Marwa Fatafta: I just want to quickly add that, yes, we're talking mostly and predominantly about US-based companies, and the Trump administration is taking an extremely protectionist approach to its tech sector. They see it as a sector that needs to be protected against regulation and accountability, particularly from the EU or in fact any other state that may use its national jurisdiction to oblige companies to take one course of action or another. For example, it caught my attention a couple of weeks ago that there was an announcement, I think from the State Department, saying that they will not grant visas to foreign officials who have mandated quote-unquote censorship by these companies. I think they were mostly referring to the Brazilian Supreme Court judge who mandated X to take certain action against accounts that were spreading disinformation, if I remember the case correctly. So in such a context, yes, exactly, what do you do? Have we engaged with the companies? We have at every opportunity, though not for this panel, because we know what the outcome would be: they would come here, rehash the same press lines and leave the room. So we can save you, and save ourselves, the trouble.


Meredith Veit: And something that we're seeing with regard to the power imbalances at play, here at the IGF with so many states present, is a strong call from all of us, in different words, that we need states to take tech regulation much more seriously. At the Resource Center, we're constantly reaching out to companies about allegations of harm across a number of different sectors, and the tech sector consistently has a lower response rate than others like mining or oil or garments. We think that's because there's less regulation, so there's less pressure; there's been more of a build-up for other sectors and industries over time in relation to business and human rights and states taking action. And for the tech sector, we see it now within the United States, for example, saying that they want to put a ban for the next 10 years on regulating AI, and there are calls to try to deregulate in the EU as well. So if we want companies to actually give us the transparency that we need for our societies, we need governments to mandate human rights due diligence and transparency, as we've heard.


Chantal Joris: Yeah, one thought is that it is also about government transparency, because a lot of the time we hear that there are direct service contracts, procurement, which is very much a service provided to a government. If you look at freedom of information access requests, often you would have a national security exemption. And often for litigation, for example, what you need is really the hard facts: you need the contracts, you need to know what exactly the service provision was, what they said about potential violations of international law; you would even need internal minutes, whether the executive was aware of certain risks, and so on. Of course, to prove the impact you need this public reporting by civil society, but you also need a lot of information that unfortunately often relies on whistleblowers and journalists. Looking at the fact that many governments attend the IGF, I think government transparency is also a very, very good point to start. Of course, questions around defense and national security are prone to find justifications in secrecy, but this is where accountability efforts can start and where we can also measure whether states are upholding their own international obligations.


Meredith Veit: I saw we had someone coming up to the microphone here as well. Yeah, please.


Audience: Hi, everyone. Audrey Moklay from Open Mic. Thank you so much for your panel. My question, and you kind of just touched on it, was around to what extent there's a defense of due diligence for these corporate executives, or for the company itself, in international human rights law, and to what extent you would need evidence of the due diligence being done on the corporate side, and in how much detail. Because I think we've seen in our engagements with certain companies that they won't even tell us who they hired as a third party to do the assessment; they're very opaque about how the due diligence is being done. So I guess my question for you is: what's the burden of proof there, and how do we place it onto the companies? Is it through the investors? Is it through the states? Thanks.


Chantal Joris: I mean, around the burden of proof, again, to sound boring, this generally depends very much on the domestic law. What sort of litigation do you bring? Is it tort law? Is it a criminal complaint? That would determine the burden of proof and the granularity of the information that you need. I will say that in some jurisdictions, notably the UK and the US, there is also the possibility of making a disclosure request, where the judge can order the company to disclose certain information that is necessary to be able to actually adjudicate the claim properly. So again, I think one needs to be quite granular and creative and really think about all the different means you might have to collect what you need for the case. And then it will depend on the exact legal basis, what exactly the chain of causation is that you might have to prove, and how you can access that information as much as possible. But it remains a challenge, which is a reality, of course.


Kiran Aziz: I can say from an investor's point of view that we work from a risk-based perspective. So for us, we need to know that there is a risk, and then we need to assess how high the risk element is. So I would say the bar is lower compared to litigation in court. That said, I think it's still important that the evidence, or the reports which are produced, come from trustworthy actors which we can rely on. And if more actors emphasize the same risk, then it becomes really clear for us that this is something we need to take into account.


Marwa Fatafta: Just to add to what you said: in the absence of legally mandated human rights due diligence, if we follow the UNGPs, it's a risk-based approach as well, right? So when you have heightened risk because of armed conflict, companies are expected to conduct heightened due diligence. For instance, with Meta's content moderation of Palestine-related content in 2021, there was a strong push from civil society, and also from their Oversight Board, to see whether the company's actions had resulted in violating human rights. At the beginning of that engagement, the company pushed back very strongly against taking that approach, saying: we will internally investigate. I think that's companies' favorite phrase, to say, you know, we'll take care of that. So these exercises are not independent, and we also don't know who they've talked to. Take, for instance, decisions made by companies on whether to build a data center in an extremely repressive and authoritarian state like Saudi Arabia, where the company said, oh, we've done our human rights due diligence. If the outcome is to green-light this project, then there are of course huge question marks raised about that exercise: what kind of questions have they asked? What risks have they interrogated and scrutinized? Who are the people they're talking to? Are they talking to the rights holders? Are they talking to the impacted communities? Or are they talking to some international NGOs that are not directly linked to, or do not really understand, the context? This, I would even say, waters down what the UNGPs were supposed to achieve ultimately.


Meredith Veit: Okay, I'm getting the four-minute signal. Any other inputs from the audience? Yes.


Audience: Hi, thank you so much. My name is Sadhana. I just had a quick question on the enforcement front. We heard from the panelists about the relevance of international criminal law and IHL, but in situations like Palestine at the moment, where genocide is so inextricably linked to armed conflict, I wanted to know whether the Genocide Convention imposes any additional duty on private actors and companies to act positively to prevent genocide, and whether there are any enforcement lessons from that convention that might also help us understand how corporate accountability might function where genocide happens in the context of an armed conflict.


Chantal Joris: Good question. As you say, there can be a risk of genocide happening, or genocide might already be happening, but the Genocide Convention's obligations are triggered well before we have established whether a genocide is happening, in the context of an armed conflict or outside of it. And there are the states' obligations, of course, to ensure that no one under their jurisdiction contributes to genocide or incitement to genocide, and so on. But because it is a state treaty, I wouldn't say it's a direct legal instrument that you can base yourself on, at least at the international level, to hold companies accountable. Of course, many states have in their domestic legal frameworks established genocide, crimes against humanity and war crimes as crimes, connected with potential universal jurisdiction clauses, and they might be able to pursue companies under those provisions. As far as I'm aware, it has been more war crimes-based complaints and criminal prosecutions, or for crimes against humanity. I'm not aware of a corporate executive or a company directly facing genocide charges recently, post-World War II; there was also the Rwanda Tribunal, of course, but there we talk again about individual criminal responsibility. Still, I think learning from the cases brought in other sectors, and, as you say, under crimes against humanity provisions as well, is definitely something that we should do if we seek to pursue strategic litigation in the tech sector.


Marwa Fatafta: I think the Genocide Convention criminalizes complicity in genocide, and the Rome Statute outlines what the modes of liability are.


Chantal Joris: But basically it says that states should, on the domestic level, criminalize complicity in genocide. Exactly, so that's what you need to…


Meredith Veit: And we should look across different sectors too. What exactly was it that convicted the Dutch businessman who was selling chemicals to Saddam Hussein's regime? What exactly was it about the sharing of information about individuals with the Argentinian regime in the Ford Motor Company case? What are the pieces that we can take from case law and previous jurisprudence and actually apply to tech? Because sharing names and personally identifiable information then can translate to sharing names and biometric IDs in the modern context. We are definitely out of time at this point, so I will just thank our fantastic panelists and everyone who participated in the audience. Hopefully this served at least as a launching point to spark some ideas and get more people involved in thinking about this, because, as you can see, there's a lot of work to be done from all angles. So thank you all so much for your time and for your interventions today. I appreciate it. Thank you.



Marwa Fatafta

Speech speed: 138 words per minute

Speech length: 1932 words

Speech time: 838 seconds

Tech companies are never neutral actors in armed conflicts and can exacerbate conflict dynamics through power asymmetries

Explanation

Fatafta argues that tech companies actively participate in and worsen conflicts rather than remaining neutral observers. They particularly impact situations where there are power imbalances between warring parties, potentially facilitating human rights abuses or contributing to atrocity crimes.


Evidence

Examples from the Gaza conflict over the past year and a half, which she describes as potentially foreshadowing the future of cyber warfare and tech company involvement


Major discussion point

Tech Companies’ Role in Armed Conflicts


Topics

Cyberconflict and warfare | Human rights principles


Agreed with

– Chantal Joris

Agreed on

Tech companies are actively contributing to conflicts rather than remaining neutral


Companies directly cause adverse human rights impacts through censorship and systematic removal of Palestine-related content

Explanation

Tech companies violate fundamental rights including freedom of expression, peaceful assembly, and non-discrimination through their content moderation policies. This represents direct harm caused by corporate policies rather than indirect contribution to conflict.


Evidence

Systematic removal of Palestine-related content online by social media companies, violating rights enshrined in the International Covenant on Civil and Political Rights


Major discussion point

Tech Companies’ Role in Armed Conflicts


Topics

Freedom of expression | Content policy | Human rights principles


Tech companies contribute to violations through third parties by providing cloud computing, AI tools, and facial recognition technologies to militaries

Explanation

Companies provide technological infrastructure and services that enable military operations linked to serious violations of international law. This includes direct technological assistance to governments and military forces engaged in conflicts.


Evidence

Google and Amazon’s $1.2 billion Project Nimbus providing cloud services to Israeli government; Google’s deepened relationship with Ministry of Defense including classified teams and joint drills; Amazon Web Services providing server farms for mass surveillance data; Microsoft selling 19,000 hours of engineering services to Israeli Ministry of Defense including on-site military base assistance


Major discussion point

Tech Companies’ Role in Armed Conflicts


Topics

Cyberconflict and warfare | Privacy and data protection | Human rights principles


Companies mirror state policies of apartheid and segregation in their service provision, as seen with Google Maps and PayPal in occupied territories

Explanation

Tech companies implement discriminatory policies that parallel and reinforce state-level segregation and apartheid systems. By selectively providing or denying services based on identity or location, they contribute to cumulative impacts of occupation and discrimination.


Evidence

Google Maps in West Bank only shows roads connecting Israeli settlements, not Palestinian towns; PayPal allows Israeli settlers in illegal settlements to access services while denying them to Palestinians


Major discussion point

Tech Companies’ Role in Armed Conflicts


Topics

Human rights principles | Digital access | Consumer protection


Companies fail to conduct meaningful heightened human rights due diligence despite operating in high-risk conflict zones

Explanation

Despite UN Guiding Principles requirements for enhanced due diligence in conflict areas, tech companies either refuse to engage with civil society concerns or conduct superficial audits. When they do respond to pressure, their due diligence processes lack transparency and meaningful oversight.


Evidence

Microsoft’s audit after public pressure concluded no contribution to harm despite admitting no insight into technology use in air-gapped military bases; companies unable to answer basic questions about due diligence processes


Major discussion point

Corporate Accountability and Due Diligence Failures


Topics

Human rights principles | Legal and regulatory


Agreed with

– Meredith Veit
– Kiran Aziz

Agreed on

Tech companies consistently fail to engage meaningfully on human rights due diligence and transparency


Disagreed with

– Kiran Aziz

Disagreed on

Effectiveness of investor exclusion as accountability mechanism


Even when companies claim to conduct audits, they lack insight into how their technologies are used, making due diligence ineffective

Explanation

Companies perform box-ticking exercises rather than genuine due diligence, admitting they have no visibility into how their technologies are actually deployed by military clients. This contradiction undermines the entire premise of effective human rights due diligence.


Evidence

Microsoft’s statement that while they don’t have insight into how technologies are used in air-gapped military bases, they concluded no contribution to harm


Major discussion point

Corporate Accountability and Due Diligence Failures


Topics

Human rights principles | Legal and regulatory


Major tech companies are quietly dropping voluntary commitments against building AI for military use and forming partnerships with defense contractors

Explanation

There is an increasing militarization trend where tech companies are abandoning their previous ethical commitments and actively seeking military partnerships. This represents a fundamental shift toward embracing rather than avoiding military applications of civilian technology.


Evidence

Google and OpenAI dropped commitments not to build AI for military use; Google signed partnership with Lockheed Martin; OpenAI partnered with defense tech company Anduril; Meta and Anduril partnering on VR products for US military


Major discussion point

Militarization of Tech Sector


Topics

Cyberconflict and warfare | Future of work


Senior executives from Meta, OpenAI, and Palantir are joining US Army Reserve as lieutenant colonels, blurring lines between civilian tech and military roles

Explanation

The creation of the Executive Innovation Corp represents an unprecedented integration of tech executives directly into military command structures. This development raises fundamental questions about the distinction between civilian technology companies and military actors.


Evidence

Senior executives from Meta, OpenAI, and Palantir joining US Army Reserve Executive Innovation Corp as lieutenant colonels to provide tech advice


Major discussion point

Militarization of Tech Sector


Topics

Cyberconflict and warfare | Future of work


US protectionist approach under current administration shields tech companies from regulation and accountability measures

Explanation

The Trump administration’s protective stance toward US tech companies creates barriers to international accountability efforts. This includes threatening foreign officials who attempt to regulate US tech companies, effectively creating a shield against external oversight.


Evidence

State Department announcement refusing visas to foreign officials who mandate ‘censorship’ by US companies, referencing Brazilian Supreme Court judge’s actions against X


Major discussion point

Regulatory and Political Challenges


Topics

Legal and regulatory | Jurisdiction



Chantal Joris

Speech speed: 155 words per minute

Speech length: 1752 words

Speech time: 674 seconds

International humanitarian law applies to individuals within companies when business activities have nexus to armed conflict, though enforcement primarily relies on international criminal law

Explanation

While IHL doesn’t directly apply to companies as entities, it does bind individual employees when their business activities are sufficiently connected to armed conflicts. This creates potential liability for corporate staff, though enforcement mechanisms are primarily through criminal law rather than corporate liability.


Evidence

Traditional application to mining companies on the ground or private military security companies, but tech companies are increasingly intertwined with military operations


Major discussion point

Legal Framework and Enforcement Challenges


Topics

Legal and regulatory | Cyberconflict and warfare


Agreed with

– Marwa Fatafta

Agreed on

Tech companies are actively contributing to conflicts rather than remaining neutral


Corporate executives could theoretically be liable under ICC jurisdiction, but realistic prospects remain limited due to high thresholds

Explanation

The International Criminal Court is developing policies on cyber conduct under the Rome Statute, which could potentially hold corporate executives criminally liable. However, the extremely high legal thresholds make actual prosecutions unlikely in the near term.


Evidence

ICC prosecutor’s office drafting policy on cyber conduct under the Rome Statute; legal persons cannot be held liable under the Statute, but corporate executives theoretically could be, subject to very high thresholds


Major discussion point

Legal Framework and Enforcement Challenges


Topics

Legal and regulatory | Jurisdiction


Domestic frameworks vary significantly in their capacity for universal jurisdiction and corporate criminal responsibility

Explanation

The ability to hold tech companies accountable depends heavily on individual countries’ legal systems and whether they have universal jurisdiction provisions and corporate criminal responsibility frameworks. This creates an uneven patchwork of potential accountability mechanisms.


Evidence

Lundin Oil case with corporate executives potentially liable for aiding war crimes; Lafarge case involving company liability itself; UK cases against parent companies over operations in Zambia


Major discussion point

Legal Framework and Enforcement Challenges


Topics

Legal and regulatory | Jurisdiction


Government transparency is crucial as many service contracts fall under national security exemptions, limiting access to evidence

Explanation

Strategic litigation requires detailed evidence including contracts and internal communications, but government procurement with tech companies is often classified under national security. This creates a fundamental barrier to accountability efforts that could be addressed through improved government transparency.


Evidence

Freedom of information requests often blocked by national security exemptions; need for contracts, internal minutes, and evidence of executive awareness of risks


Major discussion point

Legal Framework and Enforcement Challenges


Topics

Legal and regulatory | Privacy and data protection


Agreed with

– Meredith Veit
– Kiran Aziz

Agreed on

Government regulation and mandates are essential for corporate accountability


The integration of tech executives into military structures raises questions about attribution and state obligations

Explanation

When tech company executives operate within military command structures, it becomes unclear whether their actions should be attributed to the state or the company. This blurring of lines has significant implications for determining which legal frameworks apply and who bears responsibility.


Evidence

Meta executives operating within military raising questions about combatant status and state obligations


Major discussion point

Militarization of Tech Sector


Topics

Cyberconflict and warfare | Legal and regulatory


M

Meredith Veit

Speech speed

145 words per minute

Speech length

1816 words

Speech time

749 seconds

Survey response rates from tech companies are abysmally low (4% for Palestine/Israel context vs 26% for Russia/Ukraine), showing lack of transparency

Explanation

A survey of 104 technology companies operating in the occupied Palestinian territories received only a 4% response rate, compared with 26% for a similar survey about Russia/Ukraine operations. This demonstrates an unprecedented lack of engagement from tech companies on human rights due diligence in conflict zones.


Evidence

Survey of 104 tech companies in Palestine/Israel with 4% response rate vs 26% for Russia/Ukraine survey; described as unprecedented in the resource center’s history


Major discussion point

Corporate Accountability and Due Diligence Failures


Topics

Human rights principles | Legal and regulatory


Agreed with

– Marwa Fatafta
– Kiran Aziz

Agreed on

Tech companies consistently fail to engage meaningfully on human rights due diligence and transparency


Government mandates for human rights due diligence and transparency are essential since voluntary approaches have failed

Explanation

The consistently low response rates from tech companies compared to other sectors demonstrate that voluntary corporate responsibility frameworks are insufficient. Government regulation requiring mandatory human rights due diligence and transparency reporting is necessary to create accountability.


Evidence

Tech sector consistently has lower response rates than mining, oil, or garments sectors; calls for bans on AI regulation in US and deregulation in EU


Major discussion point

Regulatory and Political Challenges


Topics

Legal and regulatory | Human rights principles


Agreed with

– Chantal Joris
– Kiran Aziz

Agreed on

Government regulation and mandates are essential for corporate accountability


The tech sector has lower regulatory pressure compared to other industries like mining or oil, resulting in lower corporate response rates

Explanation

Tech companies face less regulatory scrutiny and pressure compared to traditional industries that have been subject to business and human rights frameworks for longer periods. This regulatory gap explains why tech companies are less responsive to accountability efforts.


Evidence

Tech sector consistently lower response rates than mining, oil, or garments when contacted about human rights allegations


Major discussion point

Regulatory and Political Challenges


Topics

Legal and regulatory | Human rights principles


K

Kiran Aziz

Speech speed

160 words per minute

Speech length

1503 words

Speech time

563 seconds

Institutional investors rely on long-term perspectives that incorporate material risks including human rights violations as financial risks

Explanation

Large institutional investors managing pension and savings funds take long-term investment approaches that consider human rights risks as material financial risks. This creates a business case for embedding human rights considerations into investment decisions beyond just ethical concerns.


Evidence

Norway’s Transparency Act requiring due diligence on investments; screening companies upfront before investment decisions


Major discussion point

Investor Leverage and Limitations


Topics

Economic | Human rights principles


Investors depend heavily on civil society reports and public domain information since companies provide inadequate reporting on human rights impacts

Explanation

Institutional investors cannot rely on corporate reporting for human rights risk assessment, particularly in conflict areas, and instead depend on civil society organizations and UN agencies for credible information. This highlights the critical role of civil society in corporate accountability.


Evidence

Company resources and reporting not helpful for human rights assessment; reliance on civil society reports and documentation from the UN High Commissioner for Human Rights


Major discussion point

Investor Leverage and Limitations


Topics

Economic | Human rights principles


Exclusion of companies from investment portfolios can be effective when done transparently with public documentation of reasons

Explanation

Public exclusion lists with detailed explanations of human rights concerns can influence other investors and put pressure on companies to improve practices. Transparency about exclusion criteria helps set market standards and can lead to company re-inclusion if practices improve.


Evidence

KLP’s transparent exclusion documents helping other investors follow similar exclusions; companies can be re-included if they improve practices


Major discussion point

Investor Leverage and Limitations


Topics

Economic | Human rights principles


Disagreed with

– Marwa Fatafta

Disagreed on

Effectiveness of investor exclusion as accountability mechanism


Tech companies are increasingly difficult to engage with compared to traditional sectors, often only referencing policies without discussing concrete matters

Explanation

Unlike traditional business sectors, tech companies are particularly resistant to investor engagement on human rights issues. When engagement does occur, companies typically deflect with generic policy references rather than addressing specific concerns or evidence of harm.


Evidence

Struggle to get engagement with tech companies at all; when engagement occurs, companies reference policies without discussing concrete matters


Major discussion point

Corporate Accountability and Due Diligence Failures


Topics

Economic | Human rights principles


Agreed with

– Marwa Fatafta
– Meredith Veit

Agreed on

Tech companies consistently fail to engage meaningfully on human rights due diligence and transparency


Investor engagement is limited by companies’ unwillingness to discuss concrete matters and lack of government accountability

Explanation

Investors face significant limitations in their ability to influence tech company behavior due to corporate resistance and insufficient government oversight. The burden increasingly falls on investors and business communities rather than governments taking responsibility for regulation.


Evidence

Companies unwilling to engage beyond policy references; governments taking less responsibility leaving burden on investors and business communities


Major discussion point

Investor Leverage and Limitations


Topics

Economic | Legal and regulatory


Agreed with

– Meredith Veit
– Chantal Joris

Agreed on

Government regulation and mandates are essential for corporate accountability


A

Audience

Speech speed

158 words per minute

Speech length

586 words

Speech time

222 seconds

Impact stories showing real-world consequences of corporate actions are crucial for demonstrating harm beyond just irresponsible behavior

Explanation

Many stakeholders in the digital justice space may not understand that corporate actions in conflict zones have life-and-death consequences for real people. Personal stories and concrete examples of impact are essential for making the human cost of corporate behavior visible and compelling.


Evidence

Stories showing the real-world results of corporate actions and how they affect people’s lives


Major discussion point

Evidence Requirements for Accountability


Topics

Human rights principles | Content policy


Corporate relationship mapping and partnership analysis help reveal patterns of ethical decision-making across different conflicts

Explanation

Understanding how tech companies form partnerships and make decisions across multiple conflicts provides insight into their ethical frameworks and decision-making processes. This type of evidence helps establish patterns of behavior rather than isolated incidents.


Evidence

Partnerships such as those with Lockheed Martin; understanding the ethical backing, or lack thereof, in how companies form partnerships across different conflicts


Major discussion point

Evidence Requirements for Accountability


Topics

Economic | Human rights principles


Agreements

Agreement points

Tech companies consistently fail to engage meaningfully on human rights due diligence and transparency

Speakers

– Marwa Fatafta
– Meredith Veit
– Kiran Aziz

Arguments

Companies fail to conduct meaningful heightened human rights due diligence despite operating in high-risk conflict zones


Survey response rates from tech companies are abysmally low (4% for Palestine/Israel context vs 26% for Russia/Ukraine), showing lack of transparency


Tech companies are increasingly difficult to engage with compared to traditional sectors, often only referencing policies without discussing concrete matters


Summary

All speakers agree that tech companies demonstrate unprecedented resistance to transparency and meaningful engagement on human rights issues, with extremely low response rates to surveys and superficial responses when they do engage


Topics

Human rights principles | Legal and regulatory


Government regulation and mandates are essential for corporate accountability

Speakers

– Meredith Veit
– Chantal Joris
– Kiran Aziz

Arguments

Government mandates for human rights due diligence and transparency are essential since voluntary approaches have failed


Government transparency is crucial as many service contracts fall under national security exemptions, limiting access to evidence


Investor engagement is limited by companies’ unwillingness to discuss concrete matters and lack of government accountability


Summary

Speakers agreed that voluntary corporate responsibility frameworks have failed and that government intervention through regulation, transparency requirements, and accountability mechanisms is necessary


Topics

Legal and regulatory | Human rights principles


Tech companies are actively contributing to conflicts rather than remaining neutral

Speakers

– Marwa Fatafta
– Chantal Joris

Arguments

Tech companies are never neutral actors in armed conflicts and can exacerbate conflict dynamics through power asymmetries


International humanitarian law applies to individuals within companies when business activities have nexus to armed conflict, though enforcement primarily relies on international criminal law


Summary

Both speakers reject the notion of tech company neutrality in conflicts, with Fatafta providing extensive evidence of active participation and Joris explaining the legal framework that makes individuals within companies liable


Topics

Cyberconflict and warfare | Human rights principles | Legal and regulatory


Similar viewpoints

Both speakers identify and are concerned about the increasing militarization of the tech sector, with companies abandoning ethical commitments and executives directly joining military structures

Speakers

– Marwa Fatafta
– Chantal Joris

Arguments

Major tech companies are quietly dropping voluntary commitments against building AI for military use and forming partnerships with defense contractors


The integration of tech executives into military structures raises questions about attribution and state obligations


Topics

Cyberconflict and warfare | Future of work


Both emphasize the critical importance of civil society documentation and real-world impact evidence for accountability efforts, as corporate reporting is inadequate

Speakers

– Kiran Aziz
– Audience

Arguments

Investors depend heavily on civil society reports and public domain information since companies provide inadequate reporting on human rights impacts


Impact stories showing real-world consequences of corporate actions are crucial for demonstrating harm beyond just irresponsible behavior


Topics

Human rights principles | Economic


Both identify regulatory capture and protection of tech companies as major barriers to accountability, particularly in the US context

Speakers

– Marwa Fatafta
– Meredith Veit

Arguments

US protectionist approach under current administration shields tech companies from regulation and accountability measures


The tech sector has lower regulatory pressure compared to other industries like mining or oil, resulting in lower corporate response rates


Topics

Legal and regulatory | Jurisdiction


Unexpected consensus

Investor exclusion as an effective accountability mechanism

Speakers

– Kiran Aziz
– Marwa Fatafta
– Meredith Veit

Arguments

Exclusion of companies from investment portfolios can be effective when done transparently with public documentation of reasons


Companies fail to conduct meaningful heightened human rights due diligence despite operating in high-risk conflict zones


Survey response rates from tech companies are abysmally low (4% for Palestine/Israel context vs 26% for Russia/Ukraine), showing lack of transparency


Explanation

Despite coming from different perspectives (investor, civil society advocate, moderator), there was unexpected consensus that transparent investor exclusion can be an effective accountability tool when companies refuse to engage, representing a market-based solution to regulatory gaps


Topics

Economic | Human rights principles


The fundamental inadequacy of current due diligence frameworks for tech companies

Speakers

– Marwa Fatafta
– Chantal Joris
– Kiran Aziz

Arguments

Even when companies claim to conduct audits, they lack insight into how their technologies are used, making due diligence ineffective


Domestic frameworks vary significantly in their capacity for universal jurisdiction and corporate criminal responsibility


Tech companies are increasingly difficult to engage with compared to traditional sectors, often only referencing policies without discussing concrete matters


Explanation

All speakers from different expertise areas (advocacy, legal, investment) agreed that existing due diligence frameworks are fundamentally inadequate for the tech sector, which was unexpected given their different professional backgrounds and typical approaches to corporate accountability


Topics

Human rights principles | Legal and regulatory


Overall assessment

Summary

The speakers demonstrated remarkable consensus across multiple critical issues: tech companies’ active role in conflicts, failure of voluntary accountability mechanisms, need for government regulation, and inadequacy of current due diligence frameworks. There was also agreement on the militarization trend in tech and the importance of civil society documentation.


Consensus level

High level of consensus with significant implications – the alignment between civil society advocates, legal experts, and investors suggests a broad coalition for reform. This consensus indicates that the current system of tech accountability is fundamentally broken and requires systemic change rather than incremental improvements. The agreement across different stakeholder types strengthens the case for regulatory intervention and suggests potential for coordinated advocacy efforts.


Differences

Different viewpoints

Effectiveness of investor exclusion as accountability mechanism

Speakers

– Kiran Aziz
– Marwa Fatafta

Arguments

Exclusion of companies from investment portfolios can be effective when done transparently with public documentation of reasons


Companies fail to conduct meaningful heightened human rights due diligence despite operating in high-risk conflict zones


Summary

Kiran Aziz presents investor exclusion as a potentially effective tool that can influence company behavior and help other investors follow suit, while Marwa Fatafta’s examples suggest companies remain largely unresponsive to external pressure and continue harmful practices regardless of accountability efforts


Topics

Economic | Human rights principles


Unexpected differences

Optimism about incremental progress versus systemic failure

Speakers

– Kiran Aziz
– Marwa Fatafta

Arguments

Companies can be re-included if they improve practices


Even when companies claim to conduct audits, they lack insight into how their technologies are used, making due diligence ineffective


Explanation

While both speakers work on corporate accountability, Kiran maintains some optimism that companies can improve and be re-included in investment portfolios, suggesting the system can work with proper incentives. Marwa’s analysis suggests the entire due diligence framework is fundamentally flawed and ineffective, representing a more systemic critique. This disagreement is unexpected because both are advocates for corporate accountability, yet they differ on whether current frameworks can be reformed or need a complete overhaul


Topics

Human rights principles | Economic


Overall assessment

Summary

The speakers show remarkable alignment on identifying problems with tech company accountability in conflict zones, but subtle differences emerge in their assessment of potential solutions and the effectiveness of current accountability mechanisms


Disagreement level

Low level of disagreement with high consensus on problems but nuanced differences on solutions. The implications suggest that while there is strong agreement on the need for tech company accountability, practitioners from different sectors (legal, advocacy, investment) may have varying levels of optimism about working within existing frameworks versus the need for fundamental systemic change. This could impact strategy coordination and resource allocation in accountability efforts



Takeaways

Key takeaways

Tech companies are not neutral actors in armed conflicts and actively contribute to human rights violations through direct censorship, providing military technologies, and mirroring state policies of discrimination


Current legal frameworks (IHL and human rights law) can theoretically hold tech companies accountable, but enforcement faces significant practical challenges due to high legal thresholds, jurisdictional issues, and lack of transparency


Corporate accountability mechanisms are failing – tech companies have extremely low engagement rates (4% response rate) and conduct inadequate human rights due diligence despite operating in high-risk conflict zones


The tech sector is becoming increasingly militarized, with companies dropping voluntary commitments against military AI development and executives joining military units, blurring civilian-military distinctions


Investors can leverage exclusion strategies and transparency requirements to pressure companies, but face limitations due to companies’ unwillingness to engage and lack of government accountability


Successful accountability requires multiple types of evidence: impact stories, corporate relationship mapping, hard contractual evidence, and risk assessments from trustworthy sources


Government regulation and transparency mandates are essential since voluntary corporate approaches have proven insufficient – the tech sector faces less regulatory pressure than other industries


Resolutions and action items

Civil society should continue documenting and reporting on corporate relationships and partnerships to reveal patterns of decision-making across conflicts


Investors should maintain transparent exclusion practices with public documentation to signal market expectations and help other investors follow suit


Strategic litigation should learn from cases in other sectors (mining, oil) and apply similar evidence-gathering approaches to the tech sector


Government transparency through freedom of information requests should be pursued to access service contracts and procurement details


Continued engagement with companies is necessary even when they are unresponsive, as internal voices within companies report that external pressure is helpful


Best Practice Forum meeting scheduled for Thursday at 2 p.m. during IGF to continue discussions on these topics


Unresolved issues

How to effectively regulate US-based tech companies given the protectionist stance of the current US administration


What specific burden of proof standards should apply to corporate due diligence and how to enforce meaningful transparency requirements


How to address the attribution challenges when tech executives become integrated into military structures


What mechanisms can compel companies to engage meaningfully rather than simply referencing policies


How to access classified or national security-protected information about government-tech company contracts


Whether existing international legal frameworks are adequate for addressing the unique challenges posed by tech companies in conflict zones


How to establish effective accountability when companies operate across multiple jurisdictions with varying legal standards


Suggested compromises

Risk-based approaches that require lower burden of proof than criminal litigation but still enable investor and civil society action


Combination of hard law enforcement through courts and soft law pressure through investors and civil society engagement


Utilizing both international frameworks (IHL, human rights law) and domestic legal mechanisms depending on jurisdiction and available evidence


Focusing on government transparency as a starting point when direct corporate engagement fails


Learning from successful accountability cases in other sectors while adapting approaches to tech sector specificities


Thought provoking comments

Tech companies are never neutral actors in situations of armed conflict. They exacerbate the dynamics of the conflict and sometimes even drive them or fuel them, particularly in contexts where there are asymmetries of power between warring parties.

Speaker

Marwa Fatafta


Reason

This comment fundamentally challenges the common perception of tech companies as neutral service providers. It reframes the entire discussion by establishing that tech companies are active participants in conflicts rather than passive enablers, which has profound implications for accountability and legal responsibility.


Impact

This opening statement set the foundational premise for the entire discussion, moving the conversation away from whether tech companies should be held accountable to how they should be held accountable. It established the framework for all subsequent examples and legal analysis.


There’s a surge in increasing militarization of civilian tech… Google and OpenAI have both quietly dropped their voluntary commitments earlier this year not to build AI for military use or surveillance purposes… senior executives from high-tech firms, specifically Meta, OpenAI and Palantir, are joining the US Army Reserve at a new unit called Executive Innovation Corp.

Speaker

Marwa Fatafta


Reason

This revelation exposes a dramatic shift in the tech industry’s relationship with military operations, showing how the lines between civilian tech companies and military contractors are completely blurring. The fact that executives are literally becoming military officers represents an unprecedented development.


Impact

This comment created a pivotal moment in the discussion, prompting Chantal to immediately address the legal implications of attribution and state obligations when tech executives operate within military structures. It fundamentally changed the scope of the conversation from service provision to direct military participation.


When you look at enforcement and accountability, in terms of humanitarian law, you will primarily think about international criminal law… corporate executives, in theory, under the very, very high thresholds that are under the Rome Statute could be liable under international criminal law.

Speaker

Chantal Joris


Reason

This comment bridges the gap between theoretical legal frameworks and practical enforcement mechanisms, introducing the possibility of criminal liability for tech executives under international law. It moves beyond civil remedies to criminal accountability.


Impact

This shifted the discussion from corporate responsibility frameworks to individual criminal liability, raising the stakes significantly and introducing new pathways for accountability that hadn’t been previously explored in the tech context.


An astonishingly low 4% response rate from companies is unprecedented in the resource center’s history, and previously we had sent a similar survey to tech companies that were operating in Russia and Ukraine, and 26% had responded by comparison

Speaker

Meredith Veit


Reason

This stark comparison reveals the exceptional resistance of tech companies to transparency and accountability efforts specifically in the Palestine context, suggesting either heightened sensitivity or deliberate avoidance that goes beyond normal corporate non-responsiveness.


Impact

This statistic provided concrete evidence of the accountability gap and influenced subsequent discussion about the need for mandatory rather than voluntary disclosure mechanisms. It reinforced arguments for stronger regulatory intervention.


We as investors exclude these companies, but these companies have so much influence that, even if it’s a really, really difficult path, I think it’s really important that we and civil society are still there chasing them, even if they don’t want to engage.

Speaker

Kiran Aziz


Reason

This comment acknowledges the limitations of investor power while simultaneously arguing for persistent engagement despite those limitations. It reveals the power imbalance between even large institutional investors and major tech companies.


Impact

This honest assessment of investor limitations prompted discussion about the need for government intervention and regulation, as market-based solutions alone appear insufficient to address the scale of tech company influence and resistance to accountability.


The Trump administration is taking an extremely protectionist approach to their tech sector… they will not grant visas to foreign officials who have mandated quote-unquote censorship by these companies

Speaker

Marwa Fatafta


Reason

This comment reveals how geopolitical dynamics and state protection of domestic tech companies creates barriers to international accountability efforts, showing how corporate impunity is actively supported by state policy.


Impact

This observation shifted the discussion to acknowledge the political dimensions of tech accountability, explaining why traditional engagement strategies are failing and why new approaches are needed that account for state protection of tech companies.


Overall assessment

These key comments fundamentally shaped the discussion by progressively revealing the depth and complexity of the accountability challenge. The conversation evolved from establishing that tech companies are active conflict participants, to documenting their increasing militarization, to exploring legal frameworks for accountability, to acknowledging the practical barriers created by corporate resistance and state protection. The comments collectively painted a picture of a sector that has outgrown existing accountability mechanisms and requires new approaches that account for unprecedented corporate power, state protection, and the blurring lines between civilian and military technology. The discussion moved from theoretical frameworks to practical challenges, ultimately highlighting the need for coordinated action across multiple stakeholders – civil society, investors, states, and courts – to address what appears to be a fundamental shift in how technology companies operate in conflict contexts.


Follow-up questions

What kind of evidence would lead to stronger enforcement actions against tech companies that facilitate violations of international humanitarian law?

Speaker

Meredith Veit


Explanation

This was posed as the main discussion question for audience participation, seeking input on what evidence is needed from different enforcement angles including states, investors, and courts


How can we better understand corporate relationships and partnerships beyond individual conflicts?

Speaker

Anriette Esterhuysen


Explanation

She emphasized the need to understand how corporate actors operate across different contexts and their ethical backing in forming partnerships, not just in relation to one particular conflict


Is there any pathway from using governments to get tech companies to engage when direct engagement fails?

Speaker

Monika Ermert


Explanation

This addresses the challenge of tech companies’ reluctance to engage with civil society and investors, exploring whether government pressure could be more effective


What is the burden of proof for corporate due diligence and how much detail is needed when companies are opaque about their assessment processes?

Speaker

Audrey Moklay


Explanation

This addresses the challenge of companies not disclosing who they hire for assessments or how due diligence is conducted, questioning how to place the burden of proof on companies


Whether the Genocide Convention imposes additional duties on private actors to prevent genocide and what enforcement lessons can be drawn from it?

Speaker

Sadhana


Explanation

This explores whether there are additional legal frameworks beyond IHL and human rights law that could be applied to corporate accountability in contexts where genocide occurs during armed conflict


How can we improve government transparency regarding service contracts and procurement with tech companies?

Speaker

Chantal Joris


Explanation

She identified the need for better access to government contracts and internal communications with tech companies, as this information is often protected under national security exemptions but is crucial for litigation


What can be learned from corporate accountability cases in other sectors that could be applied to the tech sector?

Speaker

Meredith Veit


Explanation

She suggested examining previous jurisprudence from cases involving other industries to identify applicable legal precedents for tech company accountability


How can civil society better document and present impact stories to demonstrate real-world consequences of tech company actions?

Speaker

Anriette Esterhuysen


Explanation

She emphasized the need for evidence showing actual impact on people’s lives, not just documentation of irresponsible behavior, to make the case for accountability more compelling


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #466 AI at a Crossroads Between Sovereignty and Sustainability

WS #466 AI at a Crossroads Between Sovereignty and Sustainability

Session at a glance

Summary

This Internet Governance Forum 2025 panel discussion explored the intersection of artificial intelligence sovereignty and environmental sustainability, examining how nations can reduce technological dependency while minimizing environmental impacts. The session was organized by LAPIN, the Sustainable AI Lab at Bonn University, and VLK Advogados, bringing together experts from government, academia, international organizations, and civil society.


Ana Valdivia from Oxford Internet Institute highlighted the environmental costs of AI infrastructure, noting that digital sovereignty is impossible when countries depend on minerals extracted from other nations for AI chips. She cited examples from Mexico where data centers consume water 24/7 while local communities have access to water only one hour per week, demonstrating how AI reproduces climate injustice. Valdivia advocated for “digital solidarity” rather than digital sovereignty to foster collaborative approaches.


Alex Moltzau from the European AI Office emphasized the need for responsible AI deployment within the context of climate crisis, noting that the EU is investing 200 billion euros in AI infrastructure while working on energy reduction standards. Pedro Ivo Ferraz da Silva from Brazil’s Ministry of Foreign Affairs discussed the asymmetry in AI development, where 84% of large language models provide no disclosure of energy use or emissions, and stressed the importance of inclusive international cooperation ahead of COP30 in Brazil.


Yu Ping Chan from UNDP warned that only 10% of AI’s economic value by 2030 will benefit the Global South, emphasizing the need for holistic approaches that address connectivity, skills, and infrastructure gaps. Alexander Costa Barbosa from Brazil’s Homeless Workers Movement introduced the concept of “popular digital sovereignty,” focusing on grassroots efforts to achieve meaningful connectivity and digital literacy in marginalized communities.


The discussion concluded that addressing AI sovereignty requires tackling multiple interconnected crises—environmental, social, and digital—through coordinated efforts that empower local communities and social movements while ensuring responsible technology deployment.


Keypoints

## Major Discussion Points:


– **Digital Sovereignty vs. Environmental Sustainability Tension**: The panel explored the fundamental challenge of how nations can achieve AI sovereignty and reduce technological dependency while minimizing environmental impacts, particularly given AI’s heavy reliance on minerals, energy, and water resources.


– **Global South Dependency and Digital Colonialism**: Extensive discussion on how AI development perpetuates colonial patterns, with Global South countries providing raw materials (cobalt, tungsten, copper) and labor for AI training while remaining excluded from shaping AI systems, with only 10% of AI’s economic value projected to accrue to Global South countries by 2030.


– **Environmental Justice and Resource Competition**: Detailed examination of how AI infrastructure creates climate injustice, exemplified by data centers in Mexico’s Querétaro state having 24/7 water access while local communities receive water only one hour per week in a drought-stricken region.


– **Labor Rights and AI Development**: Discussion of exploitative labor practices in AI development, particularly the hidden human labor required for training large language models, often performed under poor conditions in call center-like environments, with concerns about replicating historical labor exploitation patterns.


– **Alternative Approaches to Digital Sovereignty**: Presentation of concepts like “digital solidarity” instead of competitive digital sovereignty, “popular digital sovereignty” from grassroots movements, and community-driven approaches that prioritize local needs and environmental justice over purely technological advancement.


## Overall Purpose:


The discussion aimed to examine the intersection between AI sovereignty aspirations and environmental sustainability, particularly focusing on how developing nations and marginalized communities can achieve greater technological independence without exacerbating climate change and environmental degradation. The panel sought to identify policy solutions that could address both digital dependency and environmental concerns through inclusive, multi-stakeholder approaches.


## Overall Tone:


The discussion maintained a consistently serious and urgent tone throughout, with speakers expressing genuine concern about current trajectories in AI development. The tone was collaborative and solution-oriented, with panelists building on each other’s points rather than debating. There was a notable shift from academic analysis in the early presentations to more activist and practical perspectives as grassroots representatives spoke, culminating in calls for political mobilization and collective action. The overall atmosphere was one of informed concern coupled with cautious optimism about the possibility of more equitable and sustainable approaches to AI development.


Speakers

– **Alexandra Krastins Lopes**: Co-founder of LAPIN (Laboratory of Public Policy and Internet), former member of Brazilian Data Protection Authority, represents VLK Advogados (Brazilian law firm), provides legal counsel on data protection, AI, cybersecurity and government affairs


– **Jose Renato Laranjeira de Pereira**: Co-founder of LAPIN (Laboratory of Public Policy and Internet), PhD student at University of Bonn


– **Ana Valdivia**: Departmental research lecturer in artificial intelligence, government and policy at the Oxford Internet Institute, University of Oxford, investigates how data certification and algorithmic systems are transforming political, social and ecological territories


– **Alex Moltzau**: Policy officer at European AI office in the European Commission, seconded national expert from Norwegian Ministry of Digitalization and Governance, coordinates work on AI regulatory sandboxes, visiting policy fellow at University of Cambridge, background in social data science and master’s in artificial intelligence for public services


– **Pedro Ivo Ferraz da Silva**: Career diplomat, Coordinator for Scientific and Technological Affairs at the Climate Department of the Ministry of Foreign Affairs in Brazil, member of the Technology Executive Committee of UNFCCC, Brazil’s focal point to the Intergovernmental Panel on Climate Change (IPCC)


– **Yu Ping Chan**: Heads digital partnerships and engagements at UNDP (United Nations Development Program), former diplomat in Singaporean Foreign Service, Bachelor of Arts from Harvard University, Master’s of Public Administration from Columbia University


– **Alexander Costa Barbosa**: Member of the Homeless Workers Movement, digital policy consultant and researcher


– **Raoul Danniel Abellar Manuel**: Member of parliament from the Philippines representing the Youth Party


– **Edmon Chung**: From Dot Asia


– **Participant**: (Role/expertise not specified)


Additional speakers:


– **Lucia**: From Peru, works with civil society organizations (full name not provided in transcript)


Full session report

# Panel Discussion Report: AI Sovereignty and Environmental Sustainability


## Introduction and Context


This panel discussion, organized by LAPIN (Laboratory of Public Policy and Internet), the Sustainable AI Lab at Bonn University, and VLK Advogados, examined the intersection between artificial intelligence sovereignty and environmental sustainability. Jose Renato Laranjeira de Pereira, co-founder of LAPIN and PhD student at University of Bonn, introduced the session by explaining the panel’s focus on how the intersection of AI sovereignty and climate change creates both challenges and opportunities.


The panel featured diverse perspectives from government, academia, international organizations, and civil society, including Alexandra Krastins Lopes (co-founder of LAPIN and former member of Brazilian Data Protection Authority), Ana Valdivia (Oxford Internet Institute, participating remotely from an AI ethics conference), Alex Moltzau (European AI Office), Pedro Ivo Ferraz da Silva (Brazilian Ministry of Foreign Affairs), Yu Ping Chan (UNDP), Alexander Costa Barbosa (Homeless Workers Movement), and Raoul Danniel Abellar Manuel (Philippine Parliament member).


## Key Speaker Contributions


### Ana Valdivia – Digital Solidarity vs. Digital Sovereignty


Ana Valdivia argued that AI infrastructure cannot be truly sovereign because it depends on minerals and natural resources from other countries, creating inevitable interdependencies. She proposed replacing “digital sovereignty” with “digital solidarity” to create networks of cooperation between states rather than competition.


Valdivia highlighted environmental justice concerns, citing examples from Mexico’s Querétaro state where data centers have 24/7 water access while local communities receive water only one hour per week during drought conditions. She emphasized that AI development reproduces climate injustice through unequal resource access and that data centers are deployed without democratic consultation with affected communities.


She also challenged industry narratives, arguing that larger language models reproduce more stereotypes and biases while consuming more resources without necessarily being better. Valdivia noted that AI development is now dominated by big tech companies rather than universities, limiting innovation access and creating dependency for researchers.


### Pedro Ivo Ferraz da Silva – Brazilian Government Perspective


Pedro Ivo, speaking after concluding June climate negotiations in Bonn where AI was discussed, argued against creating a false binary between national sovereignty and global cooperation. He maintained that both are needed and should be rooted in equity and climate responsibility, introducing the Brazilian concept of “mutirão” (collective community effort) as a framework for AI governance.


He revealed that 84% of widely used large language models provide no disclosure of their energy use or emissions, preventing informed policy design. Pedro Ivo emphasized that developing countries need to strengthen three strategic capabilities: skills, data, and infrastructure to shape AI according to local priorities.


He advocated for moving beyond the “triple planetary crisis” narrative to address a “poly-crisis” including environmental, social, and digital rights crises. Pedro Ivo also mentioned Brazil’s role in hosting COP30 in Belém and chairing the BRICS Summit, noting the BRICS Civil Popular Forum’s work on digital sovereignty.


### Yu Ping Chan – UNDP Development Perspective


Yu Ping Chan warned that only 10% of AI’s economic value by 2030 will benefit Global South countries excluding China, with over 95% of top AI talent concentrated in six research universities in the US and China. She emphasized that digital transformation must be part of a holistic approach beyond single ministries, encompassing connectivity, infrastructure, and energy.


Chan raised questions about ownership regarding who owns the products of labor used to create large language models that end up owned by big tech companies. She stressed the need for collective action and mobilization to address AI challenges.


### Alexander Costa Barbosa – Grassroots Movement Perspective


Alexander Costa Barbosa from Brazil’s Homeless Workers Movement introduced the concept of “popular digital sovereignty,” involving communities providing services that the state hasn’t delivered, focusing on meaningful connectivity and digital literacy in peripheries. He explained the movement’s work addressing Brazil’s housing crisis, where 33 million people lack adequate housing.


Barbosa noted that workers’ rights were initially excluded from AI regulation discussions, highlighting the political nature of these debates. He connected alternative development approaches like Buen Vivir and commons-based development with climate justice discussions.


### Alex Moltzau – European AI Office Perspective


Alex Moltzau acknowledged that while AI operates within existing labor legislation frameworks, there are concerns about protecting workers involved in supervised machine learning tasks. He stressed that AI rollout must be as responsible, sustainable, and green as possible within the context of the climate crisis.


Moltzau announced the European Commission’s collaboration with Africa on generative AI with 5 million euros funding, with a deadline of October 2nd.


## Audience Questions and Responses


Raoul Danniel Abellar Manuel from the Philippine Parliament asked about ensuring labor protections in AI development to avoid replicating exploitative practices, especially in training large language models. He emphasized the need to protect workers involved in the hidden human labor required for AI training.


An audience member named Lucia asked about environmental sustainability advocacy and connecting organizations working on these issues. Ana Valdivia responded by offering to connect civil society organizations across Latin America working on data center transparency and environmental advocacy.


## Concrete Outcomes and Initiatives


Several concrete initiatives were announced during the discussion:


– The Hamburg Declaration on Responsible AI for the SDGs was launched with over 50 stakeholders committed, welcoming more organizations to sign


– The BRICS Civil Popular Forum Digital Sovereignty Working Group document was announced for release with guidelines for financing digital public infrastructures


– Commitment to connect Latin American organizations working on data center transparency and environmental advocacy


– European Commission funding for AI collaboration with Africa


## Key Themes and Challenges


The discussion revealed several interconnected challenges:


**Environmental Justice**: The panel extensively examined how AI infrastructure creates climate injustice, with Global South countries providing raw materials while bearing environmental costs but remaining excluded from AI governance decisions.


**Labor Rights**: Multiple speakers addressed exploitative labor practices in AI development, particularly the hidden human labor required for training large language models under poor working conditions.


**Transparency**: The lack of disclosure regarding AI’s environmental impacts was highlighted as a critical barrier to informed policy-making and accountability.


**Digital Colonialism**: Speakers examined how AI development perpetuates colonial patterns, with Global South countries providing resources and labor while being excluded from shaping AI systems.


## Multi-stakeholder Approaches


Alexandra Krastins Lopes emphasized applying the multi-stakeholder model of internet governance to sustainable AI sovereignty policies to effectively include social movements. The discussion highlighted the importance of moving beyond conventional development models toward comprehensive approaches addressing interconnected social, environmental, and digital challenges.


## Conclusion


The panel concluded with calls for continued advocacy and mobilization, emphasizing the need for collective action to address AI challenges. Speakers encouraged political mobilization and highlighted the importance of coordinated efforts between government officials, researchers, international organizations, and social movements to develop more equitable and sustainable approaches to AI development and governance.


The discussion demonstrated that addressing AI sovereignty requires tackling multiple interconnected crises through approaches that empower local communities while ensuring responsible technology deployment and environmental sustainability.


Session transcript

Alexandra Krastins Lopes: Good morning everyone both here in the room and those joining us online. It’s a pleasure to welcome you all to this important session at the Internet Governance Forum 2025. This panel titled AI at the Crossroads between Sovereignty and Sustainability is a joint initiative between LAPIN, the Laboratory of Public Policy and Internet, the Sustainable AI Lab at Bonn University and VLK Advogados. We are truly honored to host such a timely and global conversation and I want to begin by thanking our distinguished panelists for being here today. I’m Alexandra, I’m a co-founder of LAPIN and served for a few years in the Brazilian Data Protection Authority. Today I represent VLK Advogados, a Brazilian law firm where I provide legal counsel on data protection, AI, cybersecurity and government affairs. Now I’d like to pass the floor to José Renato who will present himself and introduce the central topic of our discussion. José Renato.


Jose Renato Laranjeira de Pereira: Hello Ale, can you hear me? Yes. So, working good? Okay, great. Well, hello everyone. Good morning, good afternoon, good evening for those watching us. It is a pleasure to be here and thank you very much Ale, Alexandra, for introducing me. My name is José Renato, I am also a co-founder of the Laboratory of Public Policy and Internet, LAPIN, and now also doing a PhD at the University of Bonn. I would also like to thank Thiago Moraes and Sietse Piku for helping organize the session and I’m very happy to be here. Well, our discussion sits exactly at this intersection between artificial intelligence sovereignty and the need to ensure that the technological developments we carry out are also consistent with the very urgent need to tackle climate change and environmental collapse as a whole. We have been identifying a growing discourse on not only AI but digital sovereignty as a whole among different governments; the European Union is an example, as are Brazil, China and the U.S. Social movements too: different initiatives among indigenous peoples and among workers’ movements are also talking about digital sovereignty, AI sovereignty, and so on. In the global south, both these nation-led discourses and the discourses of social movements are closely interrelated with a history of dependency, particularly on technology and infrastructure, that dates back to colonial times and persists through periods in which coloniality, and what many have called digital colonialism, continues to influence these discourses. At the same time, we also know that AI is deeply connected with physical infrastructure, so it is strongly dependent on minerals, on energy and on water. So our idea is to discuss here how to advance these calls for further independence and control over these technologies and their infrastructures, while avoiding an expansion of their effects on the environment, which are contributing above all to climate change. We are also interested in understanding the differences between global south and north approaches to digital sovereignty and to AI sovereignty as a whole, and that is why we have participants from distinct backgrounds here: government officials, representatives of international organizations, academia and civil society as well, including one social movement in Brazil which is taking the lead in claiming digital sovereignty over its activities. So, I pass on now back to Alexandra to talk about the policy questions that we have thought of for this panel. I’m looking very much forward to our discussion.


Alexandra Krastins Lopes: Thank you, José Renato. So, today we aim to explore the following policy questions. How can nations reduce their technological dependency in the realm of AI while ensuring that the development of these technologies leaves low environmental impacts and supports them in achieving the SDGs? What are the main tensions between the aspirations of governments and communities, including social movements and indigenous communities, with regard to AI sovereignty, and how can they be addressed? And finally, how can the multi-stakeholder model of internet governance be applied within the design of policies aimed at fostering sustainable AI sovereignty, so as to have the demands of social movements effectively taken into consideration? So let’s start with initial speeches from our dear panelists. I would like to check if Ana Valdivia is already with us. Okay, I’ll introduce her. Ana Valdivia is a departmental research lecturer in artificial intelligence, government and policy at the Oxford Internet Institute at the University of Oxford. She investigates how data certification and algorithmic systems are transforming political, social and ecological territories. Ana, the floor is yours. Thank you.


Ana Valdivia: Thank you very much, Jose, for organizing this panel about digital sovereignty and data colonialism. I’m very pleased to be here and I’m so sorry that I cannot be there in Norway, because I’ve been attending the main international conference on AI ethics where we have been discussing these current debates, right? And something very relevant that we found out was that LLMs, or generative AI, are becoming bigger, and that doesn’t mean they are becoming better, because something we found out at the conference is that bigger LLMs reproduce and learn more stereotypes than smaller LLMs. So that comes with side effects. I’ve been studying and analyzing the environmental impacts of AI for years now, and I’ve been analyzing the supply chain of artificial intelligence. And something I realized is that while nation states have this narrative about digital sovereignty, and for instance in the UK the government wants to develop more data centers to be digitally sovereign, there is another part of this debate that is neglected, which is that this infrastructure cannot be sovereign, because this infrastructure depends, as you have said, on different minerals and other natural resources that are not embedded in our so-called nation states. So that’s it: if the UK wants to become digitally sovereign, it depends on other countries like Brazil, like Pakistan, like China, like Taiwan, to develop all this infrastructure. For instance, to develop the AI chips, which are named GPUs, graphical processing units, you need cobalt, you need tungsten, you need copper, you need aluminum, and these minerals are extracted from other geographies, basically geographies within the global majority, and the extraction of these minerals has direct impacts on communities living nearby, as we have seen in the past literature on geography and extractivism. But then the increasing size of AI algorithms like GPTs and other LLMs comes with other side effects, as I have said, because now it’s not only about mineral extraction, it’s also about the processing, the training of these algorithms. And this comes with other environmental impacts, like water extraction and land, and I have seen that in Mexico, for instance. In Mexico, the state of Querétaro is inviting a lot of data centers and a lot of big tech companies to deploy their AI infrastructure there. While I can see the positive side of this, which is that the infrastructure of AI is going to be democratized because it’s going to be present in different states, it comes with other side effects: the government is inviting this infrastructure without democratically asking the communities whether they want it there. Because, as we know, data centers are connected to the electricity grid 24 hours a day, seven days a week, 365 days a year. That means they are using water and electricity every day. And Querétaro is becoming the only state in Mexico in which 100% of the territory is at risk of drought. That means communities don’t have access to water. And this is something I’ve witnessed with my own eyes: when I visited these communities in Querétaro, I saw how they don’t have access to water. They only have access to water one hour per week, while on the other side these infrastructures have access to water 24 hours a day.
So AI is nowadays not only reproducing stereotypes and biases, it is also reproducing climate injustice, because if we don’t regulate how this infrastructure is being deployed across different geographies, it is going to exacerbate climate injustice. So something we proposed at this conference on AI ethics is that, rather than talking about digital sovereignty, which creates frictions between states because every state in the world wants to become digitally sovereign, we should talk about digital solidarity. We should talk about how we can create networks of solidarity in which states help one another to develop digital sovereignty together, and about how we, as a community, can become independent from the big tech companies that are nowadays accumulating all the innovation. For instance, as an expert in AI, when I did my PhD I could develop my own AI algorithms on my own laptop. Nowadays, innovation in AI relies on big tech companies. We are not able to develop AI technology on our own anymore; we have to depend on big tech companies. And it also became clear at this conference on ethics that the LLMs we know, like GPT and Llama, are developed by big tech companies. They are not developed by universities or other technical institutions anymore. So it’s not only about infrastructure; it’s also about how we can become digitally sovereign and develop this AI with our own hands and our own infrastructure. So I think that’s my intervention, and thank you. I’m looking forward to hearing what the others have to say, and to the Q&A. Thank you very much.


Alexandra Krastins Lopes: Thank you, Ana. Now I’ll pass the floor to Alex Moltzau. He joined the European AI Office in the European Commission as a policy officer on the day it went live, a seconded national expert sent from the Norwegian Ministry of Digitalization and Governance to DG CNECT, Unit A2, Artificial Intelligence Regulation and Compliance. He coordinates work on AI regulatory sandboxes and is currently also a visiting policy fellow at the University of Cambridge.


Alex Moltzau: Thank you so much. It’s a pleasure to be here today, and really great to listen to the interventions of George, Jose and Ana. So my name is Alex Moltzau, as was said, and being here today is really wonderful for someone who is seconded from Norway to the Commission, to see everyone come together in what is a really, really bright community. This topic that we are discussing here is really close to my heart. My background is in social data science, which combines social concerns with data science methods, so programming-oriented, but with inspiration from a lot of social science fields. I also have a master’s in artificial intelligence for public services. Where Jose is placed, they run a conference about AI and sustainability, and I spoke at its first edition, though not at the editions since. I also gave a TEDx talk on AI and the climate crisis in 2020. For me, seeing all this compute increasing and infrastructure being built, with the consumption patterns we know from all other fields, it was strange to think that this was not going to be a problem. So honestly, I think it is strange that we have not seen what we are dealing with here much more clearly, because we want to deliver great services to our people, and we want amazing companies that compete in as friendly a way as possible, but at the same time we have a shared problem, expressed through the Sustainable Development Goals. I worked with AI policy full time for five years prior to joining the AI Office, where I have now worked for one year. Before that, I worked with a nonprofit organization called Young Sustainable Impact, a community of around 11,000 people around the world, where we tried to think about how to bring forward new solutions and new companies to address the Sustainable Development Goals. Maybe we were a bit naive. But I think we have to be naive, and we have to believe in that brighter future, which for sure does not mean using technology senselessly, without any thought about responsibility or about the context we live in. Because we live in a time of climate crisis, a time with a plurality of crises, and we can only face them in digital solidarity. So I really think what Ana said about the minerals is something that is also very clear to me. And I’m glad to say that where I work now, in DG CNECT, in the European AI Office, on the very ground floor of the building there is a really large artwork called Anatomy of an AI System, created by Kate Crawford and Vladan Joler, which shows the value chains of the Amazon Echo and how they are linked together. So every single time we walk into the building, we are looking at that artwork. What I like about the European Commission, and about the people who work there, is that they really care deeply about these things. So I can tell you that for sure this is not something we want to ignore; it is something we really want to commit to.
But today I’m not talking on behalf of the European Commission or representing its official perspectives; I’m here as an individual. Still, I will tell you about a few of the things we are working on. One is a call we have rolled out to finance collaboration on generative AI with Africa, to bring in new perspectives, solutions and companies, with 5 million euros committed to it. So I encourage anyone working here in Europe, or in Africa, to apply together for that; the deadline is the 2nd of October this year. Please consider whether there is a good project for collaboration there. And if you have read the EU AI Act, you might have seen a small part of it: there is a commitment to a standardization request on energy reduction. There is also a study on green AI running now internally in the Commission. So although I would like to see us doing a lot more, it is not like we are doing nothing, I’m happy to say. But what we have to think about is the rollout of all the large-scale policy mechanisms now underway, and it is a lot. InvestAI, announced during the AI Action Summit, is 200 billion euros; that is not a joke, it is quite a significant investment. We are rolling out AI factories and gigafactories, and we now have the Cloud and AI Development Act to think about this more systematically. There are a lot of movements to really scale up digital in Europe. But if sovereignty means that we can make these decisions, that we can decide to do something that would be better for our citizens, better for the population, then I think it also means that this rollout has to be as responsible as possible, as sustainable as possible, as green as possible. And of course, that is my personal opinion. I really look forward to listening to the other panelists and to the discussion today.


Alexandra Krastins Lopes: Thank you, Alex. A very interesting point you made, that we shouldn’t deploy this technology without regard to the context we live in. José Renato wants to say something.


Jose Renato Laranjeira de Pereira: Yeah, sure. Thanks, Ale. First of all, I’d like to thank the first two speakers; I think we already have lots of interesting topics for the Q&A. But now I would like to introduce Pedro Ivo Ferraz da Silva. He’s a career diplomat and the Coordinator for Scientific and Technological Affairs at the Climate Department of the Ministry of Foreign Affairs in Brazil. He’s also a member of the Technology Executive Committee of the United Nations Framework Convention on Climate Change, the UNFCCC, and Brazil’s focal point to the Intergovernmental Panel on Climate Change, the IPCC. Pedro, the floor is yours, but I would also like to say that I don’t think there is anyone more attentive to, or more knowledgeable about, what’s going on in the discussions at the UNFCCC and their intersection with technology than Pedro, particularly considering that Brazil will host the next COP this November in Belém. So, Pedro Ivo, the floor is yours. It’s great to see you here.


Pedro Ivo Ferraz da Silva: Yeah, thank you very much, José Renato, Alexandra, and the other colleagues on the panel. It’s a pleasure to reconnect with the IGF after 10 years. I had the honor to help organize IGF 2015 in João Pessoa, in Brazil, and at that time AI was only emerging as a topic, while climate change and the sustainable development goals in more general terms were rather a subtopic of the discussion. So I’m glad that, after 10 years, things have evolved and we are here delving into very interesting topics. I greet you all from Bonn. We have just concluded the June climate negotiations, and AI was a very important topic of discussion here, covering, of course, the benefits that AI can bring to climate action, but also its various environmental footprints, as Ana Valdivia indicated. As you know, the world is facing, among many other challenges, that of accelerating digital transformation while staying within planetary boundaries. And as I said, AI is both a powerful tool and a source of new tensions. It can be used in many ways, for example to model climate risks, to forecast disasters, or to optimize infrastructure for low-carbon development, but it can also deepen inequality, centralize control and exacerbate many environmental harms if it is left unchecked. So the question is not whether AI will be used; it is already being used. The real question is who decides how it is used, for what purpose and at what cost. In this context, governments have a critical role, not only as regulators but also as stewards of the public interest and as drivers of innovation and development. Governments must ensure that AI governance frameworks are rooted in democratic values, aligned with climate goals and protective of human rights. At the same time, these frameworks must encourage innovation. And if we look at innovation within the climate context, there is a dire need for AI to drive innovation not only for mitigation but also for adaptation and resilience in vulnerable communities. So I think this discussion, and I thank again LAPIN and the partners for organizing this panel here at the IGF, is timely as we look ahead to COP30 in Belém, in the heart of the Amazon. The Brazilian presidency of COP30 has proposed a vision for the COP centered around the idea of mutirão, a word a bit difficult to pronounce, but it means a collective, community-driven effort to tackle shared challenges. It is a concept that reminds us permanently that climate action is not just about technology, but also about cooperation, participation and shared responsibilities. This kind of ethos must also guide how we approach the governance of AI. And the current global landscape of AI reflects a profound asymmetry: while, as we have mentioned here, AI has an enormous potential to support climate action, its development and deployment are dominated by a few countries and a few corporations, as the previous panelists also noted.
So most of the world remains excluded from shaping these systems. At the same time, the environmental footprint of AI is increasing while transparency about AI is declining, and this is a very important aspect. A recent study, from one or two weeks ago, found that 84% of widely used large language models provide no disclosure at all of their energy use or emissions. Without better reporting, we cannot assess the actual trade-offs, we cannot design informed policies, and we cannot hold AI and its related infrastructures accountable. That’s why inclusive international cooperation is essential, and it must be accompanied by local empowerment. I refer also to another report, UNCTAD’s Technology and Innovation Report from this year, titled Inclusive AI for Development. It lays out, among many other things, that developing countries need to strengthen three strategic capabilities in order to be able to shape AI: skills, data and infrastructure. The report treats these as leverage points that will allow countries of the global South not only to access AI, but to really shape it in ways that reflect local priorities, protect biodiversity and natural resources, and advance climate justice. This is not just about developing new technologies; it’s also about ensuring that AI systems are embedded in institutions, practices and values that are transparent, inclusive and, of course, climate-aligned. And as we look into the future, I think we should reject the false binary between national sovereignty and global cooperation. We need both, rooted in equity and climate responsibility, and I think the mutirão spirit conveys this and allows us to move forward. These are my initial remarks. I thank you all again for the invitation and the discussion, and I’m looking forward to the Q&A. Thank you very much.


Jose Renato Laranjeira de Pereira: Thank you very much, Pedro. Great thoughts. I’m really looking forward to the Q&A, but for now I’ll introduce Yu Ping Chan, who is with us on site as well. Yu Ping Chan heads digital partnerships and engagements at UNDP, the United Nations Development Programme. Before joining the UN Secretariat, Yu Ping was a diplomat in the Singaporean Foreign Service, so lots of diplomats in this session. Yu Ping has a Bachelor of Arts magna cum laude from Harvard University and a Master of Public Administration from Columbia University’s School of International and Public Affairs. Welcome, Yu Ping. You have the floor, please.


Yu Ping Chan: Thank you so much to the organizers for having me here today. I represent the United Nations Development Programme. As Jose has mentioned, we are the development wing of the United Nations. We’re in over 170 countries and territories around the world, supporting governments through all phases of development, across all sectors. UNDP’s digital programming is actually quite extensive, now in more than 130 countries, I believe, supporting governments in leveraging digital and AI to achieve the sustainable development goals. So it’s really very interesting to be part of this conversation and to hear your thoughts about what is so critical in this intersection between digital and the environment. I’m also very privileged to follow Pedro, because I couldn’t agree more with the challenges he has highlighted. UNDP has been privileged to work very closely with the Brazilian COP presidency in the lead-up to COP, thinking about how these issues intertwine. When he talks about the challenges around AI exclusion and AI inequality, this is also the framing UNDP is using as we consider how the AI revolution could leave behind even more countries and really exacerbate the divides between the global South and the global North. Projections show, for instance, that only 10% of the economic value generated by AI in 2030 will accrue to the global majority countries, with the exception of China. We really face a situation where the AI future is going to be even more unequal than what we already see today. When, for instance, over 95% of the world’s top AI talent is concentrated in six research universities, basically in the US and China, you see the risk that AI becomes, as the panelists have already pointed out, the domain of certain exclusive monopolies and tech companies, developed in certain ways and not responding to the needs of local populations and the majority of the world. So UNDP has really been looking at how we strengthen local ecosystems and ensure inclusivity in the data models, LLMs and AI systems that will be generated in the future. And it’s not even just about AI, right? Before we have AI, we need data. Before we have data, we need basic connectivity. Before connectivity, we have to talk about infrastructure and energy, all of which are challenges for global South countries across the globe. So, from UNDP’s perspective, it’s not enough to think of AI by itself. You need to think about the entire developmental spectrum across all these issues and tie digital and AI, digital transformation itself, into a holistic approach that goes beyond any one ministry, that thinks about sustainability and inclusion, and about digital transformation as part of the societal approach as a whole. So, for instance, we’ve initiated a lot of work around some of the gaps other panelists have already highlighted: skills, compute and talent.
Just last week in Italy, we launched the AI Hub for Sustainable Development with the Italian presidency, a product of the G7 presidency, which looks at how we can support local AI ecosystems in Africa, strengthen AI innovation, and partner with African AI startups to bring them to scale and really build the capacity within Africa to be part of the AI revolution. We’ve also worked on digital and connectivity, as well as digital and environmental sustainability and climate issues. We have a Digital for Planet offer where, besides working closely with the Brazilian COP presidency, we also lead the Coalition on Digital Environmental Sustainability with the International Telecommunication Union, UNEP, the German Environment Ministry, the Kenyan government, and civil society organizations such as the International Science Council and Future Earth, to really think about the thought leadership and global advocacy we need around this intersection of digital and environmental sustainability. And this is in addition to the work being done in UNDP’s country offices all around the world, on national carbon registry systems and digital public infrastructure for climate in countries like Namibia, Côte d’Ivoire, Costa Rica, Nigeria and Sri Lanka. I have a very long list of projects I could cite, but suffice to say there is a lot of information online about what UNDP is doing in digital, environment and climate around the world. But all of this is not to say that it’s enough, because, as some of the other panelists have already said, we are aspiring to something much greater than just these pieces. It’s not enough to say we are doing these projects; we also have to be very thoughtful in how we roll out these projects and these big investments, exactly as has been said. Actually, it’s very interesting that Jose invited me to be part of this panel today, because it grew out of another convening we did last year at the IGF in Riyadh, where we were developing what we call the Hamburg Declaration on Responsible AI for the SDGs. It was launched just two weeks ago at the Hamburg Sustainability Conference, and it asks development practitioners, the multi-stakeholder community, governments, investment banks and civil society to come together and think about how, in using AI for development outcomes, we have to be responsible in how we design, deploy and use it, precisely in these areas of people, planet, inclusivity and so forth. We’ve already garnered over 50 stakeholders that have signed on to the Hamburg Declaration on Responsible AI for the SDGs, the first multi-stakeholder document in this particular space. We would encourage and welcome more organizations to sign up and make commitments in this regard, because it’s precisely that: how do you thoughtfully engage with AI, and how do you commit to using AI responsibly in achieving the sustainable development goals and environmental sustainability as well. So I look forward to hearing from all of you.


Alexandra Krastins Lopes: Thank you. Now I pass the floor to Alexander Costa Barbosa. He’s a member of the Homeless Workers Movement, a digital policy consultant and researcher.


Alexander Costa Barbosa: Thank you, Alexandra. Can you hear me? Yes. I would like to say hello to the panelists. I’m really pleased to be here; thank you for the invitation. I am Alexander Costa Barbosa. As Alexandra said, I’m a member of the Homeless Workers Movement. Some of you may be asking: what is the Homeless Workers Movement? It’s a housing movement in Brazil, founded in 1997. As you can imagine, there is a huge housing deficit in Brazil; statistics differ, but you can count up to 30 million people living in precarious housing conditions. When the state does not have the proper tools and instruments to really deal with this issue, people themselves start struggling and fighting for it. I say that because the same applies to technology and digital sovereignty. Our approach to digital sovereignty, what we call popular digital sovereignty (and by popular here I’m referring to the Latin American sense of the word, its mass dimension rather than the so-called folkloric one), is mainly what we’ve been doing for the past five years: doing the things the state has not provided to us so far. Really fighting for meaningful connectivity and digital literacy in the peripheries, in favelas, in slums, and so on. Also fighting for decent work, decent digital labor, beyond the statements in academia. And then we realized that what we’ve been doing in practice is, in a way, what we claim as digital sovereignty. But for this specific panel, I think it’s relevant to emphasize that this semester Brazil is also chairing the BRICS Summit, which takes place in the following week. Within the BRICS structure there is the BRICS Civil Forum, to which Brazil also added this popular dimension, the BRICS Civil Popular Forum. We also co-led its Digital Sovereignty Working Group with the Landless Workers Movement, another really important social movement in Brazil, which struggles for land reform. This work was really interesting. You will probably have access to the resulting document in the following week, but there we promoted this idea of people-centric digital sovereignty, and we also outlined guidelines for the New Development Bank to finance digital public infrastructures that take into consideration both people and nature, climate needs, and so on. There are also other guidelines specifically on AI development, and I think the document is really worth checking in the following week. When I mention meaningful access, digital literacy, decent work and so on, it is to highlight that whenever we talk about AI sovereignty, we cannot restrict the discussion, as the other panelists have already mentioned, to computing power, regulatory capacity, data capacity or risk-based regulation. We must also consider connectivity, access to electricity, digital literacy, and a transition to decent, better jobs in this so-called AI era. I think that’s mainly my initial contribution. If you have any other questions, feel free to reach us. If you’re curious about what a social movement is doing with regard to digital sovereignty, you can also access our website; I will share it later in the chat, and the moderators can eventually share it with the other attendees. Our approach to digital sovereignty is pretty much aligned with a sustainable vision of digital sovereignty.
And just to add a more critical note on sustainability here: we’ve been watching a greenwashing agenda take over sustainability for the past 15 years. So perhaps it’s time to turn to alternatives to development, especially in Latin America. Briefly speaking, in the Latin American context we have other, alternative agendas, such as Buen Vivir, or Good Living, and commons-based development, which I think are pretty much aligned with this climate justice discussion. Thank you very much.


Alexandra Krastins Lopes: Thank you, Alexander. Now, I would like to know if we have any questions on the floor. Please feel free to approach the microphone.


Raoul Danniel Abellar Manuel: Hello. Can I be heard? Yes. Okay, thank you. My name is Raoul Manuel, and I am a member of parliament from the Philippines, representing the Youth Party. I’d like to address my question to the representative from the European Commission. In the Philippines, we also want to look at the labor angle of artificial intelligence, because developing the large language models entails a lot of labor, especially in workplaces structured like call centers, where what the people actually do is train the large language models. One thing we want to ensure for our citizens is that the old exploitative labor practices are not replicated and extended into AI development. Since Europe is some steps ahead in regulating AI, I’d like to ask whether there are any provisions in your current laws or policies that also touch on the protection of labor and workers. Thank you.


Alexandra Krastins Lopes: Thank you. Just a reminder to the speakers: when answering the question, please also give your final remarks. Thanks.


Alex Moltzau: Yes, I guess this is a question directed at me. I’m here today in a personal capacity, so I’m not presenting the official views and perspectives of the European Commission; first and foremost, I would like to say that. I have a bit of a background here as a Norwegian, coming from a country that cares a lot about labor legislation and about collaboration, and I always actively talk to unions when I travel back to Oslo, because I think it’s extremely important to think about the impact on workers, on the way that we work, and on the way that we are affected. What you are saying is extremely interesting, because all these large language models require a lot of supervised machine learning: all that training data has to be labeled, and that requires a lot of human labor. Part of the backdrop here is that in Kenya, for example, there were movements to unionize, to see whether there was any way to increase the rights or the pay of the people doing all of this work and making sure these models actually work in practice. So I think your question is extremely timely. In the European Union we still have fairly strong labor legislation. AI does not operate in a vacuum: we have existing laws and existing values, so let’s make sure those existing laws and values really shape how we act in the field of AI, because I don’t think they fully do right now. There is still such a long way to go, so I just wanted to thank you for that. How to handle this within the field of AI is something I have seen the European Commission working on currently, but I don’t think I can give you a definitive answer on how to protect workers overall. The AI Act does include employment-related concerns among its risk categories, so in this way, at least in our region, it has consequences. With that, I guess that’s my final comment, and I pass it on to other questions.


Edmon Chung: Thank you. Edmon Chung from Dot Asia. Thank you for bringing up this topic, and especially for linking it to sovereignty and digital sovereignty. Many of the panelists have touched on this; I think Pedro mentioned the false dichotomy between national digital sovereignty and global cooperation, especially the global public interest, in my mind. One of the things I’d like to hear from the panel, and to really think about, is personal digital sovereignty as well. Earlier, sorry, I forgot the name of the person, someone mentioned popular sovereignty, and Yu Ping mentioned data coming before AI. Personal digital sovereignty is actually a very important part of really safeguarding AI that is people-centric, for the end user ultimately. So it is not even a dichotomy. To bring it full circle, it’s personal digital sovereignty, national digital sovereignty and the global public interest that close the loop. So yeah, that’s my contribution.


Participant: Hello, thank you for this amazing panel. My name is Lucia and I come from Peru, a country where the digital divide is also a huge concern; that’s why I value this vision of digital sovereignty that also involves things like digital literacy and the appropriation of technology. I would like to ask about environmental sustainability, because in my country, at least, there is a race to regulate AI: we are the first country in our region with an AI law, and we are trying to approve implementing regulation as well, but a huge environmental dimension is missing, and we know the same is happening with digital public infrastructure in general. So I would like to ask: how do you think we, as civil society organizations, together with grassroots organizations, can advocate for that without falling into the greenwashing approach our colleague from Brazil described?


Alexandra Krastins Lopes: Thank you. We have less than five minutes, so speakers, please feel free to answer rapidly. Thank you.


Ana Valdivia: Thank you very much for this question. I can share my insights from doing fieldwork in Mexico and Chile on the sustainability and environmental impacts of data centers there. One path would be to talk to other colleagues in the region, because there are a lot of social movements in Latin America: I can mention Sursiendo in Mexico, and Derechos Digitales, and there are other movements in Chile, and I’d be happy to put you in touch with them, because they have been advocating for more transparency. Currently, in Mexico, for instance, we don’t know how much water and energy data centers are using. Chile had a platform where citizens could access the environmental reports of data centers, but due to pressure from the data center industry, the Chilean government decided to cancel it, so data centers no longer report their environmental impacts on that platform. I think we can create the sort of solidarity I mentioned in my intervention, and I will be happy to stay in touch and to put you in contact with other organizations in Latin America. Thank you for your question.


Alexandra Krastins Lopes: Okay, Alexander Barbosa.


Alexander Costa Barbosa: I’d like to react to the first question as well, and to emphasize that in the current conjuncture, all of this discussion on AI sovereignty, AI regulation, and AI and environmental sustainability has to do with politics at the end of the day. At the beginning of the discussion on AI regulation, workers’ rights could not make it into the legislation as finally approved, and much the same applies to organizing social movements, especially popular, grassroots movements, to deal with environmental concerns. We’ve seen the indigenous struggle against deforestation in the Amazon region over the past years, and, just to give you a glimpse, the Brazilian Congress is at this moment completely opposed to any efforts from the government. So you can see that it is much, much harder than any specific guideline we may have in mind. Thank you very much for the opportunity.


Alexandra Krastins Lopes: Yu Ping?


Yu Ping Chan: And just to add to this dimension: it’s not just politics, right? It’s also big tech and the profit motive. So, on the first question, there is a need for labor regulation, but there is also a question of who owns the products of that labor in the end, because the LLMs themselves are going to be owned by big tech companies and not freely available to the populations that put in the data or the effort to actually create them. All these issues are tied into technology, which really requires, and I really liked this point, the mobilization of concerned individuals and groups who share experiences and thoughts about how to respond. So, on the question of what we should do, and linking it to what was said earlier about perhaps sometimes being naive in what we try to achieve: my closing message would be to continue to speak up, to be involved, and to really think about how we can collectively make the changes we want to see.


Pedro Ivo Ferraz da Silva: Yes, I know the time is over. From all the questions and comments made here, one conclusion I draw is that we perhaps need to move away from certain narratives that come especially from developed countries, for example that we live in a moment of a triple planetary crisis, a view that tries to limit the problems we face in the world. I would rather say we live in a moment of poly-crisis, which of course contains the environmental crisis, but also a social crisis of diminishing labor rights, while people still fight to overcome the challenges of hunger and poverty, and, of course, the crisis related to digital rights, which was already very central to the debate at IGF 2015, where I participated. So we need to tackle all these crises in a coherent way, and I think encouraging social movements and grassroots movements is fundamental. Technology can play a very important role here by leveraging those movements. So perhaps that is the final message: let’s recognize that we are facing various crises at the moment, and let’s use technology to address them in a very coherent way. Thank you.


Alexandra Krastins Lopes: Thank you all for the great discussion. Can we please take a picture? Can you put the speakers on the screen, please? Thank you.


Jose Renato Laranjeira de Pereira

Speech speed: 137 words per minute | Speech length: 702 words | Speech time: 305 seconds

Growing discourse on AI sovereignty among governments and social movements is interrelated with history of technological dependency dating back to colonial times

Explanation

The speaker argues that current discussions about AI and digital sovereignty are deeply connected to historical patterns of technological dependency that originated during colonial periods. This dependency continues today through what is termed ‘digital colonialism,’ influencing how both governments and social movements approach sovereignty over AI technologies.


Evidence

Examples include European Union, Brazil, China, U.S. initiatives, and social movements among indigenous peoples and workers


Major discussion point

AI Sovereignty and Digital Dependency


Topics

Development | Human rights principles | Legal and regulatory


Ana Valdivia

Speech speed: 140 words per minute | Speech length: 1085 words | Speech time: 461 seconds

AI infrastructure cannot be truly sovereign because it depends on minerals and natural resources from other countries, creating interdependencies

Explanation

Valdivia contends that national digital sovereignty is impossible because AI infrastructure requires minerals like cobalt, tungsten, copper, and aluminum that are extracted from different geographies, primarily in the Global South. This creates unavoidable dependencies between countries, making true sovereignty unattainable.


Evidence

UK’s digital sovereignty depends on countries like Brazil, Pakistan, China, Taiwan for minerals needed for AI chips (GPUs)


Major discussion point

AI Sovereignty and Digital Dependency


Topics

Development | Infrastructure | Economic


Agreed with

– Pedro Ivo Ferraz da Silva

Agreed on

Global South bears environmental costs while being excluded from AI governance


Digital sovereignty should be replaced with digital solidarity to create networks of cooperation between states rather than competition

Explanation

Instead of pursuing individual digital sovereignty that creates friction between states, Valdivia proposes a model of digital solidarity where countries work together cooperatively. This approach would help all states become collectively independent from big tech companies that currently dominate AI innovation.


Evidence

Big tech companies now control AI development – researchers can no longer develop AI algorithms independently as they could in the past


Major discussion point

AI Sovereignty and Digital Dependency


Topics

Development | Economic | Legal and regulatory


Disagreed with

– Pedro Ivo Ferraz da Silva

Disagreed on

Digital Sovereignty vs Digital Solidarity Approach


Larger language models reproduce more stereotypes and biases while consuming more resources without necessarily being better

Explanation

Valdivia argues that as large language models become bigger, they actually learn and reproduce more stereotypes rather than improving in quality. This challenges the assumption that larger AI models are inherently better while highlighting their increased resource consumption.


Evidence

Findings from international AI ethics conference showing LLMs that are bigger reproduce more stereotypes than smaller LLMs


Major discussion point

Environmental Impact and Climate Justice


Topics

Human rights principles | Development | Sociocultural


AI development reproduces climate injustice through unequal access to resources like water, with communities having limited access while data centers operate 24/7

Explanation

The speaker demonstrates how AI infrastructure creates environmental injustice by monopolizing essential resources like water. While local communities face severe water scarcity, data centers maintain constant access to water for their operations, exacerbating existing inequalities.


Evidence

In Querétaro, Mexico, communities have access to water only one hour per week while data centers have 24/7 access; Querétaro is becoming the only Mexican state with 100% territory at drought risk


Major discussion point

Environmental Impact and Climate Justice


Topics

Development | Human rights principles | Sustainable development


Data centers in Mexico are being deployed without democratic consultation with communities, exacerbating drought conditions

Explanation

Valdivia criticizes the lack of democratic participation in decisions about AI infrastructure deployment. Governments are inviting data centers without consulting local communities who will bear the environmental costs, particularly in water-stressed regions.


Evidence

State of Querétaro inviting big tech companies to deploy AI infrastructure while becoming 100% at risk of drought


Major discussion point

Environmental Impact and Climate Justice


Topics

Human rights principles | Development | Legal and regulatory


Agreed with

– Pedro Ivo Ferraz da Silva

Agreed on

Need for transparency in AI environmental impact reporting


AI development is now dominated by big tech companies rather than universities or other institutions, limiting innovation access

Explanation

The speaker argues that AI development has become centralized in big tech companies, unlike in the past when researchers could develop AI algorithms independently. This concentration limits broader access to AI innovation and development capabilities.


Evidence

LLMs like GPT and Llama are developed by big tech companies, not universities or other technical institutions; researchers can no longer develop AI with their own laptops as they could during PhD studies


Major discussion point

Global Inequality and Exclusion in AI Development


Topics

Economic | Development | Legal and regulatory


Agreed with

– Yu Ping Chan

Agreed on

AI development is dominated by big tech companies, excluding broader participation


Pedro Ivo Ferraz da Silva

Speech speed: 115 words per minute | Speech length: 1170 words | Speech time: 609 seconds

False binary exists between national sovereignty and global cooperation – both are needed and should be rooted in equity and climate responsibility

Explanation

Silva argues against viewing national sovereignty and international cooperation as opposing concepts. Instead, he advocates for an approach that combines both elements, grounded in principles of equity and climate responsibility, rejecting the either-or mentality.


Evidence

Brazilian COP30 presidency’s vision of ‘mutirão’ – collective and community-driven effort for shared challenges


Major discussion point

AI Sovereignty and Digital Dependency


Topics

Legal and regulatory | Development | Human rights principles


Agreed with

– Edmon Chung
– Alexander Costa Barbosa

Agreed on

Multi-level approach to digital sovereignty needed


Disagreed with

– Ana Valdivia

Disagreed on

Digital Sovereignty vs Digital Solidarity Approach


84% of widely used large language models provide no disclosure of their energy use or emissions, preventing informed policy design

Explanation

Silva highlights the lack of transparency in AI systems regarding their environmental impact. Without proper disclosure of energy consumption and emissions, policymakers cannot make informed decisions or hold AI infrastructure accountable for their environmental footprint.


Evidence

Recent study from two weeks prior showing 84% of widely used LLMs provide no disclosure of energy use or emissions


Major discussion point

Environmental Impact and Climate Justice


Topics

Legal and regulatory | Development | Sustainable development


Agreed with

– Ana Valdivia

Agreed on

Need for transparency in AI environmental impact reporting


Most of the world remains excluded from shaping AI systems while bearing environmental costs of mineral extraction

Explanation

Silva points out the fundamental injustice where AI development is controlled by a few countries and corporations, while the environmental and social costs of mineral extraction for AI infrastructure are borne by communities in the Global South who have no say in how these systems are developed.


Evidence

AI development dominated by few countries and corporations while extraction impacts affect Global South communities


Major discussion point

Global Inequality and Exclusion in AI Development


Topics

Development | Human rights principles | Economic


Agreed with

– Ana Valdivia

Agreed on

Global South bears environmental costs while being excluded from AI governance


Developing countries need to strengthen three strategic capabilities: skills, data, and infrastructure to shape AI according to local priorities

Explanation

Based on UNCTAD research, Silva identifies three key areas that developing countries must develop to move beyond merely accessing AI to actually shaping it according to their local needs and priorities. This represents a pathway from AI consumption to AI sovereignty.


Evidence

UNCTAD Technology Innovation Report ‘Inclusive AI for Development’ identifying skills, data, and infrastructure as leverage points


Major discussion point

Global Inequality and Exclusion in AI Development


Topics

Development | Capacity development | Infrastructure


AI can be powerful tool for climate action through modeling risks, forecasting disasters, and optimizing low-carbon infrastructure

Explanation

Silva acknowledges the positive potential of AI for addressing climate challenges, including its applications in risk assessment, disaster prediction, and infrastructure optimization. However, he emphasizes this must be balanced against AI’s environmental costs and governance challenges.


Evidence

AI applications in climate risk modeling, disaster forecasting, and low-carbon infrastructure optimization


Major discussion point

Sustainable Development and AI Applications


Topics

Sustainable development | Development | Infrastructure


International cooperation must be accompanied by local empowerment and community participation

Explanation

Silva argues that effective AI governance requires both international collaboration and meaningful participation from local communities. This dual approach ensures that global cooperation doesn’t override local needs and priorities in AI development and deployment.


Evidence

Brazilian COP30 presidency’s ‘mutirão’ concept emphasizing collective and community-driven efforts


Major discussion point

Multi-stakeholder Governance and Cooperation


Topics

Human rights principles | Development | Legal and regulatory


Social movements and grassroots organizations should be leveraged through technology to address multiple crises coherently

Explanation

Silva advocates for using technology to strengthen and support social movements and grassroots organizations as they work to address interconnected crises. He sees these movements as essential actors in creating coherent responses to complex challenges.


Evidence

Recognition of poly-crisis including environmental, social, and digital rights crises that need coherent responses


Major discussion point

Multi-stakeholder Governance and Cooperation


Topics

Human rights principles | Development | Sociocultural


Need to move beyond triple planetary crisis narrative to address poly-crisis including environmental, social, and digital rights crises

Explanation

Silva critiques the limited framing of current global challenges as merely a ‘triple planetary crisis’ and argues for recognizing a broader ‘poly-crisis’ that includes social issues like diminishing labor rights, ongoing poverty and hunger, and digital rights challenges that require comprehensive, interconnected solutions.


Evidence

Recognition that current crises include environmental issues, social crisis with diminishing labor rights, ongoing hunger and poverty, and digital rights crisis


Major discussion point

Sustainable Development and AI Applications


Topics

Development | Human rights principles | Sustainable development


Disagreed with

– Yu Ping Chan

Disagreed on

Scope of Current Global Crisis


Yu Ping Chan

Speech speed: 195 words per minute | Speech length: 1301 words | Speech time: 399 seconds

Only 10% of economic value generated by AI in 2030 will accrue to Global South countries excluding China, exacerbating existing inequalities

Explanation

Chan presents projections showing that the economic benefits of AI will be heavily concentrated in developed countries, with the Global South receiving only a small fraction of the value. This distribution pattern will worsen existing global economic inequalities rather than providing development opportunities.


Evidence

Projections showing 10% of AI economic value in 2030 will go to Global South majority countries excluding China


Major discussion point

Global Inequality and Exclusion in AI Development


Topics

Economic | Development | Digital access


Disagreed with

– Pedro Ivo Ferraz da Silva

Disagreed on

Scope of Current Global Crisis


Over 95% of top AI talent is concentrated in six research universities in the US and China, creating exclusive monopolies

Explanation

Chan highlights the extreme concentration of AI expertise in a handful of institutions, primarily in two countries. This concentration creates knowledge monopolies that exclude most of the world from participating in cutting-edge AI research and development.


Evidence

Over 95% of top AI talent concentrated in six research universities in US and China


Major discussion point

Global Inequality and Exclusion in AI Development


Topics

Development | Capacity development | Economic


Agreed with

– Ana Valdivia

Agreed on

AI development is dominated by big tech companies, excluding broader participation


Digital transformation must be part of holistic approach beyond single ministries, encompassing connectivity, infrastructure, and energy

Explanation

Chan argues that effective digital transformation requires coordination across multiple sectors and government departments rather than being confined to technology ministries. The interconnected nature of digital infrastructure demands comprehensive planning that addresses connectivity, infrastructure, and energy needs simultaneously.


Evidence

UNDP’s approach recognizing that before AI you need data, before data you need connectivity, before connectivity you need infrastructure and energy


Major discussion point

Sustainable Development and AI Applications


Topics

Infrastructure | Development | Legal and regulatory


Need for collective action and mobilization of concerned individuals and groups to address AI challenges

Explanation

Chan emphasizes that addressing AI-related challenges requires organized collective action from various stakeholders including individuals, civil society groups, and organizations. She advocates for continued advocacy and collaborative efforts to achieve desired changes in AI governance and development.


Evidence

Hamburg Declaration on Responsible AI for SDGs with over 50 stakeholders signing on as first multi-stakeholder document in this space


Major discussion point

Multi-stakeholder Governance and Cooperation


Topics

Human rights principles | Development | Legal and regulatory


Question of ownership arises regarding who owns the products of labor used to create LLMs that end up owned by big tech companies

Explanation

Chan raises critical questions about labor exploitation in AI development, pointing out that while workers contribute their labor to train large language models, the resulting products are owned by big tech companies rather than being freely available to the communities that helped create them.


Evidence

LLMs are owned by big tech companies and not freely available to populations that provided data or efforts to create them


Major discussion point

Labor Rights and AI Development


Topics

Economic | Future of work | Human rights principles


Agreed with

– Alexander Costa Barbosa
– Raoul Danniel Abellar Manuel

Agreed on

Labor rights concerns in AI development


Alexander Costa Barbosa

Speech speed: 121 words per minute | Speech length: 823 words | Speech time: 406 seconds

Popular digital sovereignty involves communities doing what the state hasn’t provided, focusing on meaningful connectivity and digital literacy in peripheries

Explanation

Barbosa explains that popular digital sovereignty emerges from communities taking initiative to address digital needs that governments have failed to meet. This grassroots approach focuses on practical solutions like ensuring meaningful internet access and digital education in marginalized areas like favelas and slums.


Evidence

Homeless Workers Movement’s work on meaningful connectivity, digital literacy in periphery, favelas, and slums, plus advocacy for decent digital labor


Major discussion point

AI Sovereignty and Digital Dependency


Topics

Development | Digital access | Human rights principles


Agreed with

– Pedro Ivo Ferraz da Silva
– Edmon Chung

Agreed on

Multi-level approach to digital sovereignty needed


Workers’ rights were initially excluded from AI regulation discussions, highlighting the political nature of these debates

Explanation

Barbosa points out that labor protections were not originally included in AI regulation frameworks, demonstrating how these policy discussions are fundamentally political processes where different interests compete for inclusion. This exclusion reflects broader power dynamics in technology governance.


Evidence

Workers’ rights couldn’t be included in final AI regulation approval initially, similar to environmental concerns facing opposition in Brazilian Congress


Major discussion point

Labor Rights and AI Development


Topics

Future of work | Legal and regulatory | Human rights principles


Agreed with

– Yu Ping Chan
– Raoul Danniel Abellar Manuel

Agreed on

Labor rights concerns in AI development


Alternative development approaches like Buen Vivir and commons-based development align with climate justice discussions

Explanation

Barbosa advocates for moving beyond traditional development models toward alternative approaches rooted in Latin American concepts like ‘Buen Vivir’ (Good Living) and commons-based development. These approaches offer more sustainable and equitable alternatives that align with climate justice principles.


Evidence

Latin American alternative agendas such as Buen Vivir and commons-based development as alternatives to traditional development models


Major discussion point

Sustainable Development and AI Applications


Topics

Sustainable development | Development | Sociocultural


Edmon Chung

Speech speed: 140 words per minute | Speech length: 211 words | Speech time: 90 seconds

Personal digital sovereignty is essential alongside national and global approaches to create people-centric AI systems

Explanation

Chung argues that digital sovereignty must operate at multiple levels simultaneously – personal, national, and global – rather than viewing these as competing approaches. Personal digital sovereignty is particularly important for ensuring that AI systems truly serve end users and protect individual rights.


Evidence

Recognition that data comes before AI, and personal digital sovereignty safeguards people-centric AI for end users


Major discussion point

AI Sovereignty and Digital Dependency


Topics

Human rights principles | Privacy and data protection | Development


Agreed with

– Pedro Ivo Ferraz da Silva
– Alexander Costa Barbosa

Agreed on

Multi-level approach to digital sovereignty needed


Alexandra Krastins Lopes

Speech speed: 112 words per minute | Speech length: 562 words | Speech time: 299 seconds

Multi-stakeholder model of internet governance should be applied to sustainable AI sovereignty policies to include social movements effectively

Explanation

Lopes proposes adapting the established multi-stakeholder governance model from internet governance to AI sovereignty policy-making. This approach would ensure that social movements and diverse stakeholders have meaningful participation in designing policies for sustainable AI development.


Major discussion point

Multi-stakeholder Governance and Cooperation


Topics

Legal and regulatory | Human rights principles | Development


A

Alex Moltzau

Speech speed

168 words per minute

Speech length

1446 words

Speech time

515 seconds

AI rollout must be as responsible, sustainable, and green as possible within the context of climate crisis

Explanation

Moltzau argues that given the current climate crisis and multiple global challenges, any deployment of AI technology must prioritize responsibility, sustainability, and environmental considerations. He emphasizes that technology deployment cannot ignore the broader context of environmental and social crises.


Evidence

European Commission’s commitment to standardization request on energy reduction and internal study on green AI


Major discussion point

Environmental Impact and Climate Justice


Topics

Sustainable development | Legal and regulatory | Development


AI operates within existing labor legislation frameworks, but there’s concern about protecting workers involved in supervised machine learning tasks

Explanation

Moltzau acknowledges that AI development should be governed by existing labor laws and protections, but expresses concern about whether current frameworks adequately protect workers involved in training AI systems. He references unionization efforts in countries like Kenya as examples of workers seeking better protections.


Evidence

EU AI Act includes employment concerns in risk categories; reference to Kenya unionization movements for AI training work


Major discussion point

Labor Rights and AI Development


Topics

Future of work | Legal and regulatory | Human rights principles


R

Raoul Danniel Abellar Manuel

Speech speed

121 words per minute

Speech length

182 words

Speech time

90 seconds

Need to ensure labor protections in AI development to avoid replicating exploitative practices, especially in training large language models

Explanation

Manuel raises concerns about labor exploitation in AI development, particularly in the training of large language models which requires significant human labor in call center-like structures. He advocates for ensuring that AI development doesn’t perpetuate the same exploitative labor practices found in other industries.


Evidence

Philippines context where AI training involves call center-like structures for training large language models


Major discussion point

Labor Rights and AI Development


Topics

Future of work | Human rights principles | Development


Agreed with

– Yu Ping Chan
– Alexander Costa Barbosa

Agreed on

Labor rights concerns in AI development


P

Participant

Speech speed

128 words per minute

Speech length

176 words

Speech time

81 seconds

Environmental sustainability perspective is missing from AI regulation efforts, and civil society must advocate without falling into greenwashing

Explanation

The participant from Peru points out that environmental considerations are largely absent from AI regulation efforts, even as countries rush to develop AI laws. They seek guidance on how civil society can effectively advocate for environmental sustainability in AI policy without falling into superficial greenwashing approaches.


Evidence

Peru as first country in region with AI law but missing environmental perspective in digital public infrastructure regulation


Major discussion point

Environmental Impact and Climate Justice


Topics

Legal and regulatory | Sustainable development | Development


Agreements

Agreement points

AI development is dominated by big tech companies, excluding broader participation

Speakers

– Ana Valdivia
– Yu Ping Chan

Arguments

AI development is now dominated by big tech companies rather than universities or other institutions, limiting innovation access


Over 95% of top AI talent is concentrated in six research universities in the US and China, creating exclusive monopolies


Summary

Both speakers agree that AI development has become centralized in a small number of big tech companies and elite institutions, primarily in the US and China, which excludes most of the world from participating in AI innovation and creates monopolistic control over AI technologies.


Topics

Economic | Development | Legal and regulatory


Global South bears environmental costs while being excluded from AI governance

Speakers

– Ana Valdivia
– Pedro Ivo Ferraz da Silva

Arguments

AI infrastructure cannot be truly sovereign because it depends on minerals and natural resources from other countries, creating interdependencies


Most of the world remains excluded from shaping AI systems while bearing environmental costs of mineral extraction


Summary

Both speakers highlight the fundamental injustice where Global South countries provide the raw materials and bear environmental costs for AI infrastructure while having no control over how AI systems are developed or deployed.


Topics

Development | Human rights principles | Economic


Need for transparency in AI environmental impact reporting

Speakers

– Ana Valdivia
– Pedro Ivo Ferraz da Silva

Arguments

Data centers in Mexico are being deployed without democratic consultation with communities, exacerbating drought conditions


84% of widely used large language models provide no disclosure of their energy use or emissions, preventing informed policy design


Summary

Both speakers emphasize the critical lack of transparency regarding AI’s environmental impacts, with AI infrastructure being deployed without proper disclosure of resource consumption or community consultation.


Topics

Legal and regulatory | Development | Sustainable development


Labor rights concerns in AI development

Speakers

– Yu Ping Chan
– Alexander Costa Barbosa
– Raoul Danniel Abellar Manuel

Arguments

Questions of ownership arise over the products of the labor used to create LLMs, which end up owned by big tech companies


Workers’ rights were initially excluded from AI regulation discussions, highlighting the political nature of these debates


Need to ensure labor protections in AI development to avoid replicating exploitative practices, especially in training large language models


Summary

All three speakers express concern about labor exploitation in AI development, particularly regarding workers who train AI systems but don’t benefit from the resulting products, and the systematic exclusion of labor protections from AI governance discussions.


Topics

Future of work | Human rights principles | Economic


Multi-level approach to digital sovereignty needed

Speakers

– Pedro Ivo Ferraz da Silva
– Edmon Chung
– Alexander Costa Barbosa

Arguments

False binary exists between national sovereignty and global cooperation – both are needed and should be rooted in equity and climate responsibility


Personal digital sovereignty is essential alongside national and global approaches to create people-centric AI systems


Popular digital sovereignty involves communities providing what the state hasn’t delivered, focusing on meaningful connectivity and digital literacy in the peripheries


Summary

These speakers agree that digital sovereignty cannot be achieved through a single approach but requires coordination across personal, community, national, and international levels, rejecting false dichotomies between different scales of governance.


Topics

Human rights principles | Development | Legal and regulatory


Similar viewpoints

Both speakers advocate for cooperative rather than competitive approaches to digital governance, emphasizing solidarity and collaboration while ensuring meaningful local participation in decision-making processes.

Speakers

– Ana Valdivia
– Pedro Ivo Ferraz da Silva

Arguments

Digital sovereignty should be replaced with digital solidarity to create networks of cooperation between states rather than competition


International cooperation must be accompanied by local empowerment and community participation


Topics

Development | Human rights principles | Legal and regulatory


Both speakers emphasize that AI and digital transformation must be approached holistically, considering environmental sustainability and requiring coordination across multiple sectors and policy areas rather than isolated technology-focused approaches.

Speakers

– Yu Ping Chan
– Alex Moltzau

Arguments

Digital transformation must be part of holistic approach beyond single ministries, encompassing connectivity, infrastructure, and energy


AI rollout must be as responsible, sustainable, and green as possible within the context of climate crisis


Topics

Sustainable development | Development | Infrastructure


Both speakers advocate for moving beyond conventional development models toward more comprehensive approaches that address interconnected social, environmental, and digital challenges through alternative frameworks rooted in justice and equity.

Speakers

– Alexander Costa Barbosa
– Pedro Ivo Ferraz da Silva

Arguments

Alternative development approaches like Buen Vivir and commons-based development align with climate justice discussions


Need to move beyond triple planetary crisis narrative to address poly-crisis including environmental, social, and digital rights crises


Topics

Sustainable development | Development | Human rights principles


Unexpected consensus

Critique of larger AI models

Speakers

– Ana Valdivia

Arguments

Larger language models reproduce more stereotypes and biases while consuming more resources without necessarily being better


Explanation

It’s unexpected to find consensus challenging the prevailing industry narrative that bigger AI models are inherently better. This technical critique from an AI researcher directly contradicts the dominant trend toward ever-larger models, suggesting the field may be moving in the wrong direction.


Topics

Human rights principles | Development | Sociocultural


Government officials acknowledging AI governance failures

Speakers

– Pedro Ivo Ferraz da Silva
– Alex Moltzau

Arguments

84% of widely used large language models provide no disclosure of their energy use or emissions, preventing informed policy design


AI operates within existing labor legislation frameworks, but there’s concern about protecting workers involved in supervised machine learning tasks


Explanation

It’s notable that government representatives openly acknowledge significant gaps and failures in current AI governance, including a lack of transparency and inadequate worker protections. This honest assessment from policymakers suggests a genuine commitment to addressing these issues rather than defending the status quo.


Topics

Legal and regulatory | Future of work | Sustainable development


Social movement and international organization alignment on systemic change

Speakers

– Alexander Costa Barbosa
– Yu Ping Chan
– Pedro Ivo Ferraz da Silva

Arguments

Alternative development approaches like Buen Vivir and commons-based development align with climate justice discussions


Need for collective action and mobilization of concerned individuals and groups to address AI challenges


Social movements and grassroots organizations should be leveraged through technology to address multiple crises coherently


Explanation

There’s unexpected consensus between grassroots social movements and established international organizations on the need for fundamental systemic change rather than incremental reforms. This alignment suggests broader recognition that current approaches are inadequate.


Topics

Development | Human rights principles | Sociocultural


Overall assessment

Summary

The speakers demonstrate remarkable consensus on several critical issues: the concentration of AI power in big tech companies, the environmental and social injustices created by current AI development patterns, the need for transparency and accountability, and the importance of multi-stakeholder approaches that include marginalized communities. There’s also strong agreement on the interconnected nature of digital, environmental, and social challenges.


Consensus level

High level of consensus with significant implications for AI governance. The agreement spans diverse stakeholders from government officials to social movement representatives, suggesting these concerns transcend traditional institutional boundaries. This consensus provides a strong foundation for coordinated action on AI governance reform, particularly around environmental sustainability, labor rights, and inclusive participation in AI development decisions.


Differences

Different viewpoints

Digital Sovereignty vs Digital Solidarity Approach

Speakers

– Ana Valdivia
– Pedro Ivo Ferraz da Silva

Arguments

Digital sovereignty should be replaced with digital solidarity to create networks of cooperation between states rather than competition


False binary exists between national sovereignty and global cooperation – both are needed and should be rooted in equity and climate responsibility


Summary

Valdivia argues for abandoning digital sovereignty discourse entirely in favor of digital solidarity, while Silva maintains that both national sovereignty and global cooperation can coexist without being contradictory


Topics

Development | Legal and regulatory | Human rights principles


Scope of Current Global Crisis

Speakers

– Pedro Ivo Ferraz da Silva
– Yu Ping Chan

Arguments

Need to move beyond triple planetary crisis narrative to address poly-crisis including environmental, social, and digital rights crises


Only 10% of economic value generated by AI in 2030 will accrue to Global South countries excluding China, exacerbating existing inequalities


Summary

Silva advocates for a broader ‘poly-crisis’ framework that encompasses multiple interconnected challenges, while Chan focuses more specifically on economic inequality and AI exclusion as primary concerns


Topics

Development | Human rights principles | Sustainable development


Unexpected differences

Terminology and Framing of Sovereignty Discourse

Speakers

– Ana Valdivia
– Edmon Chung

Arguments

Digital sovereignty should be replaced with digital solidarity to create networks of cooperation between states rather than competition


Personal digital sovereignty is essential alongside national and global approaches to create people-centric AI systems


Explanation

Both speakers are concerned with power dynamics in AI governance, yet Valdivia wants to abandon sovereignty terminology entirely while Chung wants to expand it to include personal sovereignty. This disagreement on terminology is unexpected given their shared concerns about democratizing AI governance


Topics

Human rights principles | Development | Privacy and data protection


Overall assessment

Summary

The main areas of disagreement center on approaches to digital sovereignty (solidarity vs. combined sovereignty-cooperation), the scope of global crises (poly-crisis vs. focused inequality concerns), and terminology for governance frameworks. However, there is strong consensus on the core problems: AI’s environmental impact, exclusion of the Global South, labor exploitation, and the need for community empowerment


Disagreement level

Low to moderate disagreement level with high convergence on problem identification but differing on solutions and framing. The disagreements are more about strategic approaches and terminology rather than fundamental values, suggesting potential for synthesis and collaboration among the speakers’ perspectives




Takeaways

Key takeaways

AI sovereignty cannot be achieved in isolation due to dependencies on minerals, infrastructure, and resources from other countries, requiring a shift from competitive sovereignty to collaborative digital solidarity


AI development is reproducing and exacerbating existing inequalities, with only 10% of AI’s economic value projected to accrue to Global South countries (excluding China) by 2030


Environmental impacts of AI are largely undisclosed, with 84% of widely used large language models providing no information about their energy use or emissions, preventing informed policy-making


AI infrastructure deployment often occurs without democratic consultation with affected communities, creating climate injustice where data centers have 24/7 access to resources while local communities face scarcity


Effective AI governance requires addressing multiple interconnected crises (environmental, social, digital rights) rather than focusing on technology in isolation


Popular/grassroots digital sovereignty involves communities providing services the state hasn’t delivered, focusing on meaningful connectivity, digital literacy, and decent work conditions


Multi-stakeholder approaches must genuinely include social movements, indigenous communities, and grassroots organizations in AI policy design


Labor rights protection is essential in AI development to prevent exploitation of workers involved in training large language models and data processing


Resolutions and action items

European Commission collaboration with Africa on generative AI, with 5 million euros in funding (deadline October 2nd)


Hamburg Declaration on Responsible AI for the SDGs launched with over 50 stakeholders committed, welcoming more organizations to sign


BRICS Civil Popular Forum Digital Sovereignty Working Group document to be released with guidelines for financing digital public infrastructures


Ana Valdivia offered to connect civil society organizations across Latin America working on data center transparency and environmental advocacy


Encouragement for continued advocacy and mobilization of concerned individuals and groups to collectively address AI challenges


Unresolved issues

How to effectively regulate AI environmental impacts without falling into greenwashing approaches


How to ensure meaningful participation of Global South countries in shaping AI systems beyond just accessing them


How to balance rapid AI development and deployment with environmental sustainability requirements


How to address the concentration of AI innovation in big tech companies versus universities and public institutions


How to implement effective transparency requirements for AI energy consumption and emissions disclosure


How to ensure labor rights protection in AI development across different jurisdictions and regulatory frameworks


How to democratically involve communities in decisions about AI infrastructure deployment in their territories


Suggested compromises

Adopting ‘digital solidarity’ framework instead of competitive digital sovereignty to enable cooperation while maintaining autonomy


Developing AI governance that balances innovation encouragement with climate goals and human rights protection


Creating networks of collaboration between states and civil society organizations to share experiences and strategies


Implementing the ‘mutirão’ (collective community-driven effort) approach to AI governance that emphasizes cooperation and shared responsibility


Strengthening three strategic capabilities (skills, data, infrastructure) in developing countries while maintaining global cooperation


Using existing labor legislation frameworks as foundation for AI worker protection rather than creating entirely new systems


Thought provoking comments

Rather than talk about digital sovereignty, that creates sort of like frictions between states, because all the states in the world want to become digital sovereign, we should talk about digital solidarity. And we should talk about how we can create networks of solidarity, that we help one state with other states… all together to develop digital sovereignty and how we can become as a community independent from big tech companies.

Speaker

Ana Valdivia


Reason

This comment fundamentally reframes the entire discussion by challenging the competitive nationalism inherent in ‘digital sovereignty’ discourse and proposing a collaborative alternative. It’s intellectually provocative because it suggests that the very framing of sovereignty creates the problems the panelists are trying to solve.


Impact

This concept of ‘digital solidarity’ became a recurring theme throughout the discussion. Pedro Ivo later referenced it directly, and Yu Ping’s closing remarks about collective action echoed this sentiment. It shifted the conversation from nation-state competition to collaborative problem-solving.


AI is not only nowadays reproducing stereotypes and biases, it’s also reproducing climate injustice, because if we don’t regulate how this infrastructure is being implemented in different geographies, it’s going to exacerbate the consequences of climate injustice.

Speaker

Ana Valdivia


Reason

This comment introduces a critical new dimension by connecting AI bias research with environmental justice, using concrete examples from Querétaro, Mexico, where communities have water access only one hour per week while data centers have 24/7 access. It demonstrates how AI infrastructure creates new forms of inequality.


Impact

This framing of ‘climate injustice’ influenced subsequent speakers to address environmental impacts more seriously. Pedro Ivo built on this by discussing the need for transparency in AI energy reporting, and Yu Ping referenced the broader developmental spectrum needed to address these inequalities.


We should reject actually the false binary that exists between national sovereignty and global cooperation. I think we need both of them to be rooted in equity, climate responsibility, and I think the mutirão spirit kind of conveys this.

Speaker

Pedro Ivo Ferraz da Silva


Reason

This comment introduces the Brazilian concept of ‘mutirão’ (collective community effort) as a framework for AI governance, challenging the either/or thinking that dominates policy discussions. It’s culturally grounded yet universally applicable.


Impact

This concept provided a philosophical foundation that other speakers built upon. Edmon Chung later expanded it to include personal digital sovereignty, creating a three-level framework (personal, national, global) that enriched the discussion’s complexity.


Our approach to digital sovereignty, or what we call popular digital sovereignty… it deals with the massive aspect of sovereignty instead of the so-called folkloric aspect of popular. For us, it’s mainly what we’ve been doing the past five years. It’s like doing things that the state actually haven’t provided to us so far.

Speaker

Alexander Costa Barbosa


Reason

This comment introduces a grassroots perspective that challenges both state-centric and corporate-centric approaches to digital sovereignty. It’s particularly insightful because it comes from the lived experience of a housing movement that has extended its organizing principles to digital rights.


Impact

This intervention grounded the theoretical discussion in practical organizing experience. It influenced the Q&A session, with participants asking about labor rights and grassroots advocacy, and Yu Ping’s final remarks about the importance of speaking up and collective action.


84% of widely used large language models provide no disclosure at all of their energy use or emissions. Without better reporting, we cannot assess the actual trade-offs, we cannot design informed policies, and we cannot hold AI and related infrastructures accountable.

Speaker

Pedro Ivo Ferraz da Silva


Reason

This statistic is striking because it reveals the fundamental lack of transparency that makes informed policy-making impossible. It connects the abstract discussion of sustainability to concrete governance challenges.


Impact

This transparency issue became a focal point for practical solutions. Ana Valdivia later referenced similar transparency struggles in Latin America, and it reinforced the need for the regulatory approaches that Alex Moltzau described from the European perspective.


It’s not only about infrastructure. It’s also about how we can become digitally sovereign and how we can develop this AI with our own hands and with our own infrastructure… We are not able to develop AI technology anymore. We have to depend on big tech companies.

Speaker

Ana Valdivia


Reason

This observation about the shift from distributed to centralized AI development is particularly insightful because it comes from someone who experienced this transition firsthand as a researcher. It highlights how technological sovereignty has been eroded even within academic institutions.


Impact

This comment deepened the discussion about what sovereignty actually means in practice. It influenced Yu Ping’s later comments about ownership of AI products and the concentration of AI talent, and connected to Alexander’s points about movements doing what states haven’t provided.


Overall assessment

These key comments fundamentally shaped the discussion by moving it beyond traditional policy frameworks toward more collaborative and justice-oriented approaches. Ana Valdivia’s concept of ‘digital solidarity’ and Pedro Ivo’s rejection of false binaries created space for Alexander’s grassroots perspective to be heard as equally valid to governmental and institutional approaches. The concrete examples of environmental injustice and lack of transparency grounded abstract concepts in lived realities. Together, these interventions transformed what could have been a conventional policy discussion into a more nuanced exploration of power, justice, and alternative frameworks for technology governance. The discussion evolved from technical and regulatory concerns toward questions of collective action, environmental justice, and community-driven solutions.


Follow-up questions

How can we create networks of digital solidarity between states to help develop digital sovereignty collectively and become independent from big tech companies?

Speaker

Ana Valdivia


Explanation

This addresses the need to move beyond competitive national digital sovereignty approaches toward collaborative frameworks that can challenge big tech monopolies


How can we improve transparency and reporting requirements for AI systems’ energy use and emissions, given that 84% of widely used large language models provide no disclosure?

Speaker

Pedro Ivo Ferraz da Silva


Explanation

Without better reporting, it’s impossible to assess trade-offs, design informed policies, or hold AI infrastructure accountable for environmental impacts


How can developing countries strengthen the three strategic capabilities (skills, data, and infrastructure) needed to shape AI according to local priorities?

Speaker

Pedro Ivo Ferraz da Silva


Explanation

This is essential for Global South countries to not just access AI but actively shape it to reflect local priorities and advance climate justice


How can AI regulation include stronger provisions for labor protection, particularly for workers involved in training large language models?

Speaker

Raoul Danniel Abellar Manuel


Explanation

There’s a need to ensure that AI development doesn’t replicate exploitative labor practices, especially in countries providing data annotation and model training services


How can civil society organizations advocate for environmental sustainability in AI regulation without falling into greenwashing approaches?

Speaker

Lucia (participant from Peru)


Explanation

Many countries are rushing to regulate AI but missing the environmental dimension, and there’s a need for effective advocacy strategies that avoid superficial environmental commitments


How can personal digital sovereignty be integrated with national digital sovereignty and global public interest to create a comprehensive framework?

Speaker

Edmon Chung


Explanation

This addresses the need to move beyond false dichotomies and create frameworks that protect individual rights while enabling national autonomy and global cooperation


How can we address the ownership and control issues around AI products created through Global South labor but owned by big tech companies?

Speaker

Yu Ping Chan


Explanation

This highlights the need to examine who benefits from AI development when the labor comes from one region but the profits and control remain with corporations in another


How can we develop coherent approaches to address the poly-crisis (environmental, social, digital rights crises) rather than treating them as separate issues?

Speaker

Pedro Ivo Ferraz da Silva


Explanation

Moving beyond the ‘triple planetary crisis’ narrative to address interconnected crises including labor rights, poverty, hunger, and digital rights alongside environmental concerns


How can technology be leveraged to support and amplify grassroots and social movements working on digital sovereignty and environmental justice?

Speaker

Pedro Ivo Ferraz da Silva


Explanation

This explores the potential for technology to empower social movements rather than just serve corporate or state interests


How can we ensure democratic participation in decisions about AI infrastructure deployment, particularly regarding environmental impacts on local communities?

Speaker

Ana Valdivia


Explanation

This addresses the problem of governments inviting AI infrastructure without consulting communities who will bear the environmental costs, such as water scarcity


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.