Open Forum #82 Catalyzing Equitable AI Impact: the Role of International Cooperation
24 Jun 2025 13:30h - 15:00h
Session at a glance
Summary
This discussion focused on addressing the global AI divide and ensuring equitable access to artificial intelligence technologies, particularly for developing countries and the Global South. The session was moderated by Ambassador Henri Verdier and featured speakers from various international organizations, governments, and regions, serving as a precursor to India’s upcoming AI Impact Summit in February 2026.
Participants identified three primary barriers hindering equitable AI adoption: inadequate infrastructure (including connectivity, electricity, and access to GPUs), skills gaps and lack of technical talent, and insufficient culturally relevant datasets. Minister Cina Lawson from Togo emphasized that without inclusion in AI development, entire regions risk being erased from future knowledge systems. Several speakers highlighted the stark disparities in global AI resources, noting that all of Africa has less than 1% of global data center capacity and fewer than 1,000 GPUs.
The discussion revealed an “optimism divide” where developing countries view AI as an opportunity for growth, while developed nations focus more on risks and regulation. Speakers stressed the importance of moving beyond being mere consumers of AI technologies developed elsewhere to becoming active producers and co-creators. Key solutions proposed included creating shared repositories of AI applications, developing voice-enabled services in local languages, establishing public infrastructure for secure data sharing, and implementing techno-legal regulatory frameworks.
Multiple speakers emphasized the need for inclusive multilateral cooperation through organizations like UNESCO, ITU, OECD’s Global Partnership on AI, and UN initiatives. The discussion concluded with a commitment to continue this dialogue through participatory processes leading up to India’s AI Impact Summit, which aims to democratize AI access and ensure the technology benefits all of humanity rather than perpetuating existing inequalities.
Keypoints
## Major Discussion Points:
– **AI Divide and Barriers to Equitable Access**: The discussion extensively covered the three main barriers preventing equitable AI adoption globally: lack of infrastructure (including connectivity, electricity, GPUs, and data centers), skills gaps (particularly in STEM education and AI literacy), and insufficient access to relevant datasets. Speakers emphasized how these gaps particularly affect the Global South and could lead to further marginalization.
– **Cultural and Linguistic Representation in AI**: Multiple speakers highlighted the critical need for AI systems to be developed in local languages and reflect diverse cultural contexts. There was strong emphasis on ensuring that AI datasets and applications represent the knowledge, languages, and cultural values of all regions, not just dominant Western perspectives, to prevent entire populations from being excluded from the AI-powered future.
– **Multilateral Cooperation and Governance Frameworks**: The conversation focused heavily on the role of international organizations (UN, ITU, UNESCO, OECD) and initiatives like the Global Partnership on AI, Hiroshima Process, and various summits in creating inclusive AI governance. Speakers discussed the need for coordinated global efforts, shared standards, and collaborative frameworks to democratize AI access.
– **Moving from Consumers to Producers**: A recurring theme was the urgent need for developing countries to transition from being mere consumers of AI technologies developed elsewhere to becoming active producers and co-creators. This included calls for joint research programs, shared infrastructure, technology transfer, and ensuring developing nations have a seat at the table in AI design and decision-making.
– **Actionable Solutions and Implementation**: The latter part of the discussion focused on concrete pathways forward, including creating repositories of AI applications that can be shared globally, developing voice-enabled services in local languages, establishing frameworks for secure data sharing, and building capacity through targeted training programs and public-private partnerships.
## Overall Purpose:
The discussion served as a preparatory session for India’s upcoming AI Impact Summit in Delhi (February 2026), with the session itself positioned between the Paris AI Action Summit and the Delhi summit. The primary goal was to examine the growing AI divide between developed and developing nations and identify concrete, actionable solutions for creating a more inclusive global AI ecosystem that benefits everyone, particularly the Global South.
## Overall Tone:
The discussion maintained a collaborative and constructive tone throughout, with speakers showing genuine concern about AI inequality while remaining optimistic about solutions. The tone was professional yet urgent, with participants acknowledging the severity of the AI divide while emphasizing the need for immediate action. There was a notable shift from problem identification in the early portions to solution-focused discussions toward the end, with speakers building on each other’s ideas and showing strong consensus around key priorities. The moderator’s efforts to keep discussions brief and focused helped maintain momentum and ensure all voices were heard.
Speakers
**Speakers from the provided list:**
– **Abhishek Agarwal** – Government of India, India AI mission representative
– **Henri Verdier** – France’s Ambassador for Digital Affairs, session moderator
– **Cina Lawson** – Minister for Digital Economy and Transformation of Togo
– **Amandeep Singh Gill** – UN Tech Envoy and Under-Secretary-General, United Nations
– **Yoichi Iida** – Special Policy Advisor to the Minister of Information and Communications from the Government of Japan
– **Mariagrazia Squicciarini** – CEO from the Social and Human Sciences Sector UNESCO
– **Audrey Plonk** – Deputy Director, Science, Technology and Innovation (STI), OECD (joined virtually)
– **Andrea Jacobs** – AI Focal Point from Antigua and Barbuda (joined virtually)
– **Sharad Sharma** – Founder of iSPIRT, India (joined virtually)
– **Tomas Lamanauskas** – Deputy Secretary General of the ITU (joined virtually)
– **Audience** – Various audience members who asked questions
**Additional speakers:**
– **Martina Legal Malakova** – President, GAIA-X Hub Slovakia; Vice-Chair, SME Committee at Business at OECD; MAG 2024
– **Deanne Hewitt-Mills** – Runs a global data protection office consultancy
– **Nupur Chunchunwala** – Runs a foundation that unlocks the potential of neurodiverse individuals globally
Full session report
# Comprehensive Report: Addressing the Global AI Divide – Ensuring Equitable Access to Artificial Intelligence
## Executive Summary
This discussion, moderated by Ambassador Henri Verdier, France’s Ambassador for Digital Affairs, served as a preparatory session for India’s upcoming AI Impact Summit scheduled for February 2026. Strategically positioned between the Paris AI Action Summit (February 2025) and the Delhi summit, this session focused specifically on development and inclusion aspects of AI governance, complementing previous summits’ emphasis on existential risk (Bletchley Park) and innovation/governance/environmental impacts (Paris).
The session brought together representatives from international organisations, governments, and civil society to examine the growing artificial intelligence divide between developed and developing nations, with particular focus on ensuring equitable access to AI technologies for the Global South. The discussion was structured around three key questions: identifying barriers to AI access, understanding the stakes of AI exclusion, and developing concrete solutions for inclusive AI development.
The discussion revealed a stark reality: whilst AI promises transformative benefits for humanity, current development patterns risk creating unprecedented inequalities. Participants identified three fundamental barriers preventing equitable AI adoption globally: inadequate infrastructure (including connectivity, electricity, and access to GPUs), significant skills gaps particularly in STEM education, and insufficient access to culturally relevant datasets. The conversation evolved from problem identification to solution-focused discussions, emphasising the urgent need for multilateral cooperation and innovative approaches to democratise AI access.
## Key Participants and Strategic Context
The session featured diverse voices from across the global AI governance landscape, reflecting the collaborative nature of India’s approach to the AI Impact Summit. **Abhishek Agarwal** from India’s AI mission highlighted the country’s innovative approaches, including the Bhashini project for natural language processing, the AIKosh platform for datasets, and India’s provision of 50,000 GPUs at less than $1 per hour. **Cina Lawson**, Togo’s Minister for Digital Economy and Transformation, provided powerful insights from the African perspective, emphasising the existential nature of AI exclusion.
**Amandeep Singh Gill**, the UN Tech Envoy, outlined multilateral frameworks including the Global Digital Compact and the establishment of an international independent scientific panel on AI. **Yoichi Iida** from Japan’s government presented the Hiroshima Process approach and Japan’s recently enacted AI promotion law, whilst addressing unique demographic challenges of an aging society.
**Tomas Lamanauskas**, Deputy Secretary General of ITU (joining virtually), presented comprehensive statistics on global AI infrastructure disparities and outlined multiple ITU initiatives including the Coalition for Sustainable AI and the AI Skills Coalition. **Mariagrazia Squicciarini** from UNESCO contributed perspectives on AI ethics and inclusive development, whilst **Audrey Plonk** from the OECD discussed the Global Partnership on AI’s expansion efforts.
Virtual participants including **Andrea Jacobs** from Antigua and Barbuda, **Sharad Sharma** from India’s iSPIRT foundation, and audience members provided additional regional and technical perspectives that enriched the discussion with practical insights and challenging questions about implementation.
## The Three Fundamental Barriers to AI Equity
### Infrastructure Deficits: The Stark Reality of Global Disparities
The discussion revealed alarming disparities in global AI infrastructure. **Tomas Lamanauskas** presented stark statistics showing that Africa, despite representing 18% of the global population, possesses only 1.8% of global data centre capacity. The continent has fewer than 1,000 GPUs available for AI development, highlighting the massive infrastructure gap that must be addressed.
**Cina Lawson** emphasised that infrastructure challenges extend beyond mere connectivity to include reliable electricity supply, which remains inconsistent across much of Africa. She noted that without addressing these fundamental infrastructure needs, countries cannot participate meaningfully in AI development or deployment. **Abhishek Agarwal** responded with India’s approach to compute scarcity, describing how India provides 50,000 GPUs at less than $1 per GPU-hour through shared access models that could potentially be replicated in other developing regions.
The ITU’s Digital Infrastructure Investment Initiative, involving seven development finance institutions with 1.6 trillion in assets, represents one approach to addressing these infrastructure gaps through coordinated international investment.
### Skills and Education Gaps: From STEM Crisis to AI Literacy
The skills shortage emerged as a critical barrier, with **Cina Lawson** highlighting a concerning trend of declining interest in mathematics and science education among African children. This foundational challenge threatens long-term AI capacity building efforts across the continent. The discussion revealed that skills gaps extend beyond technical capabilities to include AI literacy among policymakers and the general population.
**Yoichi Iida** noted that even developed countries like Japan face unique challenges, with aging populations requiring trust-building and literacy programmes for AI adoption. Japan’s approach focuses on building trust through its AI promotion law and multi-stakeholder governance models, recognising that demographic transitions create different skill development needs.
The ITU’s AI Skills Coalition, with 50 partners aiming to train 10,000 people, represents one multilateral approach to addressing these capacity gaps, whilst **Tomas Lamanauskas** emphasised the need for tailored approaches that address different demographic contexts and development levels.
### Data Availability and Cultural Representation: The Language Divide
The third barrier—access to relevant datasets—proved particularly complex. **Amandeep Singh Gill** observed that language datasets are concentrated in only six or seven languages, missing crucial cultural contexts that would make AI systems relevant to Global South populations. **Cina Lawson** emphasised that without cultural representation in AI datasets, entire regions risk being excluded from future knowledge systems.
**Abhishek Agarwal** highlighted India’s focus on voice-based AI services through the Bhashini project and local language initiatives as essential for including millions currently outside the digital ecosystem. This approach recognises that text-based interfaces may not be appropriate for populations with limited literacy or different communication preferences, making voice-based AI solutions culturally appropriate alternatives.
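To make the voice-first pattern concrete, the sketch below outlines the kind of pipeline such services imply: speech in a local language is transcribed, answered, and spoken back in the same language. It is a minimal illustration only; the function names and data shapes are assumptions made for this report and do not reflect Bhashini’s actual APIs or models.

```python
# Illustrative sketch of a voice-first service pipeline (ASR -> response -> TTS).
# All components are hypothetical placeholders, not Bhashini's real interfaces.
from dataclasses import dataclass


@dataclass
class VoiceRequest:
    audio: bytes     # raw audio captured from the user
    language: str    # language tag for the user's mother tongue, e.g. "hi-IN"


def speech_to_text(req: VoiceRequest) -> str:
    """Placeholder for an ASR model covering the user's language."""
    return "<transcribed query>"


def answer_query(query: str, language: str) -> str:
    """Placeholder for a domain service or language model replying in the same language."""
    return f"<answer to '{query}' in {language}>"


def text_to_speech(text: str, language: str) -> bytes:
    """Placeholder for a TTS model producing audio in the user's language."""
    return text.encode("utf-8")


def handle_voice_query(req: VoiceRequest) -> bytes:
    """End-to-end flow: transcribe, generate a reply, and voice it back."""
    query = speech_to_text(req)
    reply = answer_query(query, req.language)
    return text_to_speech(reply, req.language)


if __name__ == "__main__":
    print(handle_voice_query(VoiceRequest(audio=b"", language="hi-IN")))
```

The design point is that literacy is never assumed: every stage operates in the user’s spoken language, which is why speakers framed voice interfaces as an inclusion mechanism rather than an optional feature.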
The AIKosh platform mentioned by Agarwal represents another approach to democratising access to datasets, though the specific mechanisms for ensuring cultural representation and local relevance require continued attention.
## The Existential Stakes of AI Exclusion
Perhaps the most powerful moment in the discussion came when **Cina Lawson** articulated the existential nature of AI exclusion: “If we are not part of the conversation, we won’t exist in the future. One fear that we have is that imagine the world 20 years from now. And if AI represent the totality of knowledge, if you’re not part of this knowledge, people, if someone coming from I don’t know which planet 20 years from now, looking at the data on the platform, if we don’t exist on this platform, it will mean that we don’t exist at all.”
This framing elevated the discussion beyond technical challenges to questions of cultural survival and representation in human knowledge systems. It influenced subsequent speakers to address AI inclusion not merely as a development issue but as a matter of preserving human diversity and ensuring all cultures have a voice in shaping AI-powered futures.
**Henri Verdier** contextualised this concern within historical patterns, drawing parallels to previous technological transitions like television and GMOs, noting that the Global South’s current enthusiasm for AI stems from hope for development benefits rather than complacency about risks.
## The Optimism Divide: Contrasting Global Perspectives
**Tomas Lamanauskas** introduced a fascinating paradox he termed the “optimism divide.” Citing recent surveys, he noted that around 70% of people in the Global South view AI as potentially helpful for development, whilst a similar share of people in developed countries fear job displacement and other risks. This counterintuitive finding suggests that those with less access to AI are more optimistic about its potential benefits.
This perspective difference has significant implications for AI governance, suggesting that developing countries may be more willing to embrace AI adoption if barriers are addressed, whilst developed nations focus primarily on risk management and regulation. **Yoichi Iida**’s presentation of Japan’s “AI promotion law” rather than restrictive regulation reflects this different approach, emphasising trust-building and benefit realisation alongside risk management.
## Multilateral Cooperation Frameworks and Concrete Initiatives
The discussion highlighted numerous existing frameworks for international AI cooperation, with speakers revealing both opportunities and coordination challenges in current approaches.
### UN Global Digital Compact and Scientific Panel
**Amandeep Singh Gill** outlined the UN’s Global Digital Compact, which established an international independent scientific panel on AI and mandated global dialogue on AI governance within the UN system. He emphasised ongoing work on innovative financing options for AI capacity building, noting that nearly 200 consultations have fed into a report to be presented in September. The UN’s clearing house approach for standards development represents another mechanism for coordinated international action.
### Japan’s Hiroshima Process and Multi-Stakeholder Governance
**Yoichi Iida** presented Japan’s Hiroshima Process, which promotes AI company risk assessment and information sharing to foster trust. The Hiroshima Process Friends Group advocates for co-governance involving governments, businesses, civil society, and academia to create trustworthy AI ecosystems. Japan’s recent enactment of AI promotion law demonstrates practical implementation of these principles.
### ITU’s Comprehensive AI Initiative Portfolio
**Tomas Lamanauskas** outlined the ITU’s extensive AI-related initiatives, including:
– The Coalition for Sustainable AI launched at the Paris summit
– AI Standards Summit series (first in New Delhi, second in December)
– AI Skills Coalition with 50 partners aiming to train 10,000 people
– Digital Infrastructure Investment Initiative with seven DFIs holding 1.6 trillion in assets
– Upcoming AI for Good Global Summit (8–11 July) with the second AI Governance Day on 10 July
### OECD and UNESCO Frameworks
**Audrey Plonk** discussed the OECD’s Global Partnership on AI expansion efforts, aiming to include more countries at different AI development levels, whilst addressing financial divides that limit SME engagement in AI development. **Mariagrazia Squicciarini** highlighted UNESCO’s AI ethics framework and readiness assessment methodology, which help countries evaluate their AI preparedness and implement ethical-by-design approaches.
## From Consumers to Producers: Transforming Global South Participation
A recurring theme throughout the discussion was the urgent need for Global South countries to transition from being mere consumers of AI technologies to becoming active producers and co-creators. **Andrea Jacobs** articulated this challenge clearly: “We are overwhelmingly consumers of AI technologies that are developed elsewhere. And oftentimes our realities, languages or priorities in mind… Most of these companies don’t bear this in mind… the tools that we adopt are not built for us.”
**Cina Lawson** provided five specific recommendations for transformation:
1. Focus on local problems and solutions
2. Ensure local data availability and control
3. Prioritise local languages and cultural contexts
4. Establish research programmes and joint funding initiatives
5. Develop local talent training programmes within Global South countries
**Andrea Jacobs** proposed a two-point action plan emphasising regional cooperation and practical implementation pathways, whilst **Mariagrazia Squicciarini** noted that current AI innovation concentration in few companies limits breakthrough innovation potential from startups and smaller entities, which typically drive radical innovation.
## Innovative Solutions and Paradigm Shifts
### Public-Private Innovation Models and Techno-Legal Regulation
**Sharad Sharma** presented perhaps the most radical critique of current approaches, arguing that “more of the same is a recipe for disaster” and calling for fundamental paradigm shifts. He advocated for innovation architecture that combines public goods with private innovation, arguing that purely private sector-driven development leads to value extraction rather than local value creation.
Sharma proposed techno-legal regulation to replace traditional regulatory approaches, suggesting that conventional regulation is inadequate for preventing gaming by AI service providers. He highlighted India’s development of public infrastructure for controlled data sharing through frameworks like DEPA (Data Empowerment and Protection Architecture), which enables data sharing whilst preserving privacy and local control.
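To illustrate the kind of consent-mediated exchange a DEPA-style architecture implies, the sketch below shows a minimal consent-artifact check before any data leaves the provider. It is a hedged illustration under stated assumptions: the field names and structure are invented for this report and do not reproduce the actual DEPA specification or India’s account aggregator schemas.

```python
# Minimal sketch of consent-gated data sharing in the spirit of DEPA-style
# frameworks; the ConsentArtifact fields are illustrative assumptions only.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ConsentArtifact:
    data_principal: str          # the individual whose data is being shared
    data_provider: str           # institution holding the data
    data_consumer: str           # institution requesting the data
    purpose: str                 # declared purpose of use
    fields: tuple                # specific fields the principal agreed to share
    expires_at: datetime         # end of the consent validity window


def is_request_permitted(consent: ConsentArtifact,
                         consumer: str,
                         requested_fields: list,
                         purpose: str) -> bool:
    """Allow a request only if it stays strictly within the signed consent."""
    if consumer != consent.data_consumer:
        return False
    if purpose != consent.purpose:
        return False
    if datetime.now(timezone.utc) > consent.expires_at:
        return False
    return all(f in consent.fields for f in requested_fields)


if __name__ == "__main__":
    consent = ConsentArtifact(
        data_principal="user-123",
        data_provider="provider-A",
        data_consumer="service-B",
        purpose="credit-assessment",
        fields=("income", "account_age"),
        expires_at=datetime(2026, 1, 1, tzinfo=timezone.utc),
    )
    print(is_request_permitted(consent, "service-B", ["income"], "credit-assessment"))  # True
    print(is_request_permitted(consent, "service-C", ["income"], "credit-assessment"))  # False
```

The point of the “techno-legal” framing is that such checks are enforced in code at the moment of exchange, rather than relying solely on after-the-fact regulatory audits.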
His emphasis on young adults and child safety as global priorities added another dimension to inclusion discussions, recognising that AI’s impact on cultural identity and development requires special attention to vulnerable populations.
### Practical Implementation Strategies and Global Repositories
**Abhishek Agarwal** proposed creating a global repository of AI applications across sectors such as healthcare, agriculture, and education that could be shared and adapted by different countries. This approach would build on India’s Digital Public Infrastructure model, allowing countries to benefit from proven solutions whilst adapting them to local contexts.
The discussion also highlighted the need for global data sharing protocols and anonymisation tools to enable cross-border collaboration whilst preserving privacy and control. **Henri Verdier** emphasised the importance of public research in developing these foundational technologies, drawing parallels to historical examples of public investment in transformative technologies.
### Ethical and Inclusive Design Approaches
**Mariagrazia Squicciarini** advocated for ethical-by-design approaches rather than problem-fixing approaches for AI implementation. She argued that inclusive AI benefits everyone by improving system performance through better, more representative data, making inclusion a technical and business imperative rather than merely a moral one.
This perspective helped shift discussions from charity-based framings of inclusion to practical arguments about AI quality and effectiveness, making the case more compelling for stakeholders focused on performance outcomes.
## Audience Engagement and Broader Inclusion Perspectives
The session included valuable audience participation that expanded the discussion beyond geographic inclusion. **Martina Legal Malakova** raised important questions about companies paying citizens for data use, highlighting economic dimensions of data sovereignty. **Deanne Hewitt-Mills** recommended B Corp standards for measuring social impact, providing practical frameworks for accountability.
Most significantly, **Nupur Chunchunwala** challenged the panel’s focus on geographic inclusion by highlighting that human diversity includes neurodiversity, disabilities, and generational differences that cut across geographic boundaries. This created productive tension between geographic-focused inclusion and broader human diversity considerations, enriching the discussion with recognition that inclusive AI must address multiple dimensions of human difference simultaneously.
## Economic Implications and Sustainable Business Models
The discussion revealed significant challenges in developing sustainable business models for AI infrastructure in the Global South. **Cina Lawson** identified the need for innovative financing approaches and new business cases for shared infrastructure, whilst **Audrey Plonk** noted financial divides that limit SME engagement in AI development and deployment.
**Tomas Lamanauskas** observed that the Global South’s optimism about AI creates opportunities for development-focused applications, contrasting with developed countries’ concerns about job displacement. This suggests different market opportunities and business model requirements across regions, with implications for how international cooperation and investment should be structured.
The ITU’s Digital Infrastructure Investment Initiative represents one approach to addressing financing challenges, though speakers acknowledged that transitioning from high-level commitments to concrete implementation pathways remains a critical challenge requiring continued attention.
## Unresolved Challenges and Future Research Needs
Despite the productive discussion, several critical challenges remain unresolved:
### Coordination Among Multiple Initiatives
With numerous multilateral initiatives addressing AI governance—UN Global Digital Compact, Hiroshima Process, ITU programmes, OECD partnerships, UNESCO frameworks—coordination mechanisms to avoid duplication and ensure coherent global approaches require further attention. **Henri Verdier** noted the strategic positioning of the Delhi summit to bridge different regional perspectives, but systematic coordination remains challenging.
### Implementation Pathways and Concrete Mechanisms
The transition from high-level commitments to actionable pathways for AI inclusion needs more detailed planning. **Abhishek Agarwal** identified this as a key challenge requiring continued attention through the preparatory process for the AI Impact Summit.
### Measurement and Evaluation Frameworks
Frameworks for tracking progress on inclusive AI adoption and impact need development. **Audrey Plonk** mentioned OECD’s work on measuring compute capability availability, but comprehensive evaluation systems that capture cultural representation, skills development, and sustainable participation remain nascent.
### Financing Innovation and Risk Distribution
How to finance the massive infrastructure investments needed to bridge the AI divide whilst ensuring sustainable and equitable risk distribution remains unclear. Whilst speakers identified the need for innovative financing options, specific mechanisms and their implementation require further development.
## Pathways to the AI Impact Summit 2026
The discussion concluded with **Abhishek Agarwal** outlining specific commitments to continue dialogue through participatory processes leading to India’s AI Impact Summit in February 2026. These include:
– Public consultations to ensure broad participation
– Working groups focused on specific technical and policy challenges
– Open calls for side events and collaborative initiatives
– Transparent and inclusive processes for shaping the summit’s outcomes
**Henri Verdier** positioned the summit strategically between the Paris AI Action Summit and other international gatherings, emphasising its potential role in bridging different regional perspectives and approaches to AI governance. The collaborative approach, involving partnerships with UNESCO, ITU, and other international organisations, reflects recognition that addressing the AI divide requires sustained multilateral cooperation.
## Conclusion and Strategic Implications
This discussion revealed both the urgency and complexity of addressing the global AI divide, whilst demonstrating growing maturity in international AI governance conversations. The session’s evolution from problem identification to solution-focused discussions, combined with concrete commitments to continued collaboration, suggests meaningful progress toward actionable frameworks for inclusive AI development.
The existential framing provided by **Cina Lawson** and the paradigm-shifting proposals from **Sharad Sharma** elevated the discussion beyond technical problem-solving to fundamental questions about technological sovereignty, cultural survival, and equitable value distribution in the AI era. These perspectives, combined with the practical solutions proposed by various speakers and the concrete initiatives outlined by international organisations, provide a rich foundation for continued international cooperation on AI inclusion.
The upcoming AI Impact Summit represents a critical opportunity to translate these insights into actionable commitments and concrete pathways for ensuring that AI serves all of humanity rather than perpetuating existing inequalities. Success will require sustained commitment to the multilateral cooperation frameworks discussed, innovative financing mechanisms that address real infrastructure and capacity needs, and genuine partnership between developed and developing nations in shaping AI’s future.
The session’s emphasis on moving from consumption to production, combined with recognition of cultural representation as both a moral imperative and technical necessity, provides a framework for AI development that could benefit all participants whilst preserving human diversity. The challenge now lies in implementing these insights through the collaborative processes leading to February 2026 and beyond.
Session transcript
Abhishek Agarwal: excellency, Cina Lawson, Minister for Digital Economy and Transformation of Togo. We’ll have Mr. Tomas Lamanauskas, Deputy Secretary General of the ITU, who will be joining virtually. Mr. Amandeep Singh Gill, the UN Tech Envoy Under Secretary General from the United Nations. Then we’ll have Yoichi Iida, Special Policy Advisor to the Minister of Information and Communications from the Government of Japan. He has been a firm supporter of us at GPAI also. Then we’ll have Dr. Mariagrazia Squicciarini, CEO from the Social and Human Sciences Sector, UNESCO. UNESCO also has been a key partner with us in our AI journey. Then Audrey Plonk, Deputy Director, STI, OECD, will be joining virtually. Welcome, Audrey, and again a key stakeholder and a partner at the GPAI forums. Then we have Ms. Andrea A. Jacobs, AI Focal Point from Antigua and Barbuda, who is joining virtually. And our colleague from India, Sharad Sharma, founder of iSPIRT, is also joining virtually. Now it’s my pleasure to hand over to Ambassador Henri Verdier, France’s Ambassador for Digital Affairs, who not only agreed to moderate today’s session, but has also helped in curating it and shaping the very conception of this session. With that, I’m pleased to hand over to Ambassador Verdier to guide the session forward.
Henri Verdier: Thank you. Thank you, Abhishek. Wow, that’s a very difficult task I did accept. As you can see, we have a brilliant set of speakers and brilliant minds. Most of them are friends, and I have the difficult task to be sure that all the nine speakers will speak, and will speak briefly, and will answer a lot of important questions. And as a second point, we will during one and a half hours speak about a very important topic. I won’t summarize because you will do. But in one sentence, innovation is not always a progress. And progress is not always for everyone. And the question is, with this impressive revolution of AI, how can we be sure that it will benefit everyone, including the emerging economies and the South that I don’t call global, but the vast majority of humankind? So that’s the question today. As Abhishek said, we are meeting here between two important summits of heads of state. The Paris AI Action Summit that was organized last February, and the Delhi AI Impact Summit that will be organized next February. I just want to say that there are very important UN tracks regarding AI governance and ethics of AI, and they are of the utmost importance. But it is worth, in between those tracks, to have some meetings of heads of state, and to see that each of them can put the emphasis on one important aspect of this broad question. So, for example, the first one in Bletchley Park was dedicated to existential risk, and that was great. In Paris, we did speak a bit more about innovation, governance, and environmental impacts. I feel that in Delhi, it will be more focused on development, on inclusion, and benefit for everyone. And that’s a great story, and all the rest of the year, we are working within the different UN processes. So, we’ll start. So, we’ll try to address three questions in 90 minutes. First, to speak a bit about this AI divide from a, let’s say, Global South perspective today. Then, what can the multilateral and multistakeholder cooperation give to us? And then, can we define together an actionable pathway for an inclusive AI ecosystem? And I start with the most difficult part of the debate. I ask each speaker, if possible, in two minutes, to take the floor. Tell us from your position, your region, your responsibilities, what is the most pressing structural or technological barrier that hinders equitable AI adoption, and why does it matter for global AI systems? And if you agree, Your Excellency, Minister Lawson, I give you the floor. Thank you.
Cina Lawson: Thank you very much, Ambassador Verdier. Good afternoon, everyone. I think it’s a very important question, because from our perspective, when we think of AI, we view it as a tool. And so we say three things. Three things are going to hinder AI development in Togo, or in Africa, or the global south, which is, from our perspective, it’s going to be the lack of infrastructure. So that’s number one. The second is that we need to better train our people. So I would say skills. And the third one is data set, the lack of availability of these data sets. In terms of infrastructure, we think there is almost, we’re still struggling with connectivity, with reliable electricity. As you know, we won’t have access to GPUs or data centers. So when we think about how to better include the global south in these conversations, we need to think about how to fund this infrastructure, which types of business models do we need to support in order for this infrastructure to fill this gap? So that’s number one. But number two is that when you say that AI is important, it has to be, we need to think of it as a human-centric… tool. And so every time from, you know, African perspective, when we think about artificial intelligence, we think that it needs to be used to solve our problems. So defining the problem definition, you know, requires skills. Right now on the African continent, we’re facing a major challenge, which is that we have less and less kids that choose to study math and science, you know. And with that in mind, when we talk about skills, we know that we need to address the education challenge, which is a huge one. And then the third thing I say is data sets. For example, during the pandemic, when Togo used artificial intelligence, we used satellite imagery. So that didn’t require us to have a lot of data within the country. And we also used mobile telcos metadata. But one thing that needs to be said is that if digital transformation is a challenge, it also means that a lot of our countries don’t have the data that they would need on top of which they would apply, you know, algorithms. So building these data sets, which are hundreds or tens of projects that we need to develop, is also something we need to look into in order to be relevant. And why is it important to be relevant is that AI is a great tool. That’s number one. Number two is that if we are not part of the conversation, we won’t exist in the future. One fear that we have is that imagine the world 20 years from now. And if AI represent the totality of knowledge, if you’re not part of this knowledge, people, if someone coming from I don’t know which planet 20 years from now, looking at the data on the platform, if we don’t exist on this platform, it will mean that we don’t exist at all. So it’s extremely relevant that we be part of this because it’s going to define whether we get to even exist or not.
Abhishek Agarwal: Thank you, Minister. Abhishek? Yeah, I kind of echo the views of Her Excellency, like the three key ingredients for any AI application or model are infrastructure, compute mainly, and talent, skills, and data sets. In fact, when we were designing our AI strategy, we realized that on skills, we are pretty up the ladder because we are known as the tech capital of the world. Our engineers are part of almost every major initiative in digital transformation within India also. We have implemented India’s stack and digital public infrastructure. So on talent, skills, we are pretty okay. But when it came to availability of compute infrastructure and data sets, we had a lot of work to do. So the AI mission that we are implementing focuses a lot in enabling compute available. And what we have done is that we don’t have as many GPUs as the US has or the big tech companies have, but we have made around 50,000 GPUs available at a very low cost, less than a dollar a GPU per hour, which is available to Indian researchers, academicians, startups, so that they can start training models, they can do inferencing, they can build applications in healthcare, agriculture, education, and other sectors. So that’s one initiative that we have taken to address the gap in compute infrastructure. The other is about data sets. How do we ensure the data sets on both the public and the private sector across domains, across sectors are made available? For that also, we built a platform called AI Coach in which we are incentivizing all key stakeholders to contribute to data sets that are AI ready, that are shareable through APIs, which can be used by developers and by entrepreneurs to build applications. When we look at adoption of AI globally, what do we do? We believe that these bottlenecks are there across, in fact, most… of the AI today is controlled by a few companies in a few countries. Our focus, the impact summit that we will be hosting next year, will be on like how do we democratize access to AI compute, data sets, algorithms, how do we ensure that that the benefits of AI are used for solving societal problems in health care, in agriculture, in solving problems of science and maths education, how do we address the lack of teachers who are there, how do we make education available in mother tongue, so language becomes a very very important component. What we are working in India through a project called Bhashini, which is a natural language processing, is that enabling various services in all Indian languages and mainly through voice. The voice-based LLM is our focus area and when we are able to offer services through a voice command in the mother tongue, then we will really be able to empower millions of people who are out of the digital ecosystem and when that happens it results in a lot of benefits in yield, in productivity, in benefits. So I would say that global consensus on focusing on democratizing AI, making the global south part of the conversations, ensuring that the compute, the data sets are available, algorithms are shared, applications are shared, will go a long way in ensuring that the whole world becomes a key stakeholder in AI conversations and not just ends up being AI users of solutions provided by a few companies. Thank you very much. 
Henri Verdier: Friends, before giving you the floor, Yoshi-san, we know that Amandeep Singh and Thomas have to leave earlier because you have other engagements, so maybe I will pass the floor to Amandeep and then to Thomas, and maybe you can be a bit longer because we’ll continue the conversation without you. So maybe, Amandeep, if you can also tell us why does this topic matter and what can the multilateral and multi-stakeholder system provide as solutions. So you have four minutes. Or five.
Amandeep Singh Gill: Thank you. And thank you to you and to Abhishek for getting us together. I think there’s strong momentum coming out of the Paris AI Action Summit, going into the summit to be hosted by India. The focus on AI Action, AI Impact is an important turn in the conversation. And I agree with you that of course, there are more inclusive processes and they have their role. I’ll come back to that in a bit. But engaging leaders on a regular basis is important. Leaders everywhere are talking about AI, they’re acting on the AI related challenges, and it’s good to get them together in this summit format. Now, the agenda which Abhishek has described is very much welcome. I think on top of the existing digital divide, we have a looming AI divide. All of Africa, less than a thousand GPUs, less than one percent of the data center capacity. Most of the data sets, language data sets are in six or seven languages. The cultural context is also very specific, North American, Western Europe. And we already see in many parts of the world where there are efforts, for example, in Japan, in the Gulf, in many other parts of the world, to find more contextually relevant data sets, find more use cases that are appropriate for that context. And Cina spoke about those use cases. I think we have to have this global dynamic and a local dynamic without which we cannot really democratize the opportunity and advance progress on the sustainable development goals. Just a moment to reflect on how this connects with the ongoing work at the UN. Of course, you will hear from the ITU and the UNESCO on their longstanding work on AI issues, the AI for Good Summit, the AI ethics framework, but we took a decisive turn last year when the Global Digital Compact was adopted. It’s the reflection of the high-level advisory body on AI (and Sharad is here; he was a distinguished member) that landed in those negotiations and led to key decisions. So one decision was on setting up an international independent scientific panel. We need those regular scientific assessments. It’s a fast-moving technology. It’s going to impact on various sectors, employment, for example, the environment, that aspect was mentioned. So we need regular assessments based on a global perspective, not the perspective of a region or a few companies, but a global scientific perspective. Alongside that, we need a regular global dialogue on AI governance within the United Nations. So the summits are there. They are important moments for leaders to engage, but on a sustained basis, on an inclusive basis, we need that dialogue so we can learn from each other the experience of the EU with the AI Act, what’s working, what’s not working, China’s experience with interim measures on Gen-AI, other approaches need to see what works, what doesn’t work, and also ground all this effort in our shared norms, international law, the International Human Rights Treaties, the SDGs, and other commitments on environment, on gender, and so on. So that dialogue is crucial. And then we need to work on AI capacity building. I mentioned the AI divide. The GDC asked the SG to come back with a report on innovative financing options for AI capacity building. That draft has been finalised based on nearly 200 consultations, a lot of work across the UN system, and this will be presented, the report will be presented in September.
It will allow governments and other actors, philanthropies, private sector to consider all these aspects of compute data, talent development, the shareable open use cases and how to invest in those so that the effort, for example, launched in Paris, current AI, or the efforts recently embarked on by G7 countries, they can be put into a globally cohesive, impactful framework. And finally, there’s work on standards. I’m sure Thomas will go into that. There was a decision in the GDC in the sense of we should have a regular engagement, a clearing house kind of engagement on standards. We build up those standards into a more coherent, more impactful set of soft regulation. So the AI safety institutes are there, started at Bletchley Park, taken forward in different ways, now being rebranded. I’m sure India has done some thinking on that. There’s aspect of children’s safety, which is being thought about. So how do we build standards in these various areas and come together on a regular basis, again, for the industry to benefit and for the tech community to build this technology in a trusted way. So I’m sorry I have to leave, but I leave you with these thoughts, and we are looking forward to the February AI Summit, and we will support the summit organizers, the co-hosts going forward, just as we did in Paris. Thank you.
Henri Verdier: Thank you, Amandeep. Are you in Geneva, Thomas? We miss you.
Tomas Lamanauskas: Thank you very much, Henri. So indeed, great to join you from Geneva. Regretfully, I cannot be in Norway because our council is ongoing, our annual council, but really great to see you there.
Henri Verdier: And goodbye, Amandeep, I think. Please, please. Amandeep did leave without his phone, so there is a trouble on the scene. But we are listening to you, and again, regarding the two questions, the importance of equitable AI, and what can we do, mentioning that the ITU did organize for a long time the AI for Good initiative, and you have quite an experience on it.
Tomas Lamanauskas: Thank you, thank you very much, Henri. Indeed, it’s a pleasure to be joining this panel even this time virtually, especially as we had a very great presence and collaboration for the AI Action Summit in France, and indeed, where we together launched a Coalition for Sustainable AI, as we mentioned, sustainability being a key part of that conversation in Paris. And indeed, we’re now looking forward to the AI Impact Summit in India, of course, next February. And again, we’re coming there not for the first time, you know, just last year we had as part of our, on the sidelines of our World Telecommunication Standardization Assembly, we had, you know, related but also independent two important events there, AI for Good Impact India, as well as a first AI Standards Summit, and I think Amandeep mentioned how important is the standards collaboration there. So indeed, it’s great to build a network together with, you know, with you, and make sure that this dialogue continues to be inclusive. So now, back to this specific question here about the gaps. And indeed, I think, you know, the three gaps are already quite a few people mentioned. So infrastructure, you know, and I would add finance, you know, because to kind of help, you know, to have infrastructure there, we need to… the finance at the end of the day. And I think here, indeed, we still have a huge gap, not only in the basic connectivity, but also in the kind of specific data infrastructure. When we talk, when we look about data centers, the whole of Africa has around 1.8% of the global data centers at the same time when having more than 18% of the global population. So disparities are pretty big. Of course, skills were mentioned, and data sets as well, as part of that equation. I would like to add a few more elements and maybe just explore, of course, innovative capabilities, innovation. If you look at the patents, for example, a rough measure how we look at innovation, you see the two countries joined together really dominate that area. And they’re not living by a lot of percentages. So how do we generate that innovativeness, innovation coming from other areas? And that means how we generate those companies that could also do that. Then trust, the trust gap, because around 60% of the people around the world have issues with AI trust. So that is not necessarily unique to the, let’s say, developing versus developed countries or whichever way we look, but that is an issue around the world. The other thing that I think from our perspective is also important is a policy gap and a policy barrier. Because I think a lot of those things, I mean, they need the solid policies. And I think to create the solid policies is also interrelated. We need policymakers to understand whether they’re regulating or governing. So I think this is a very interrelated topic.
And of course, this is a high correlation between having infrastructure and skills in the country and having the policies there as well. So our surveys of the countries have demonstrated that actually, you know, there’s a big policy gap. Still around 55% of the countries say they don’t have a policy, they don’t have the right policies or strategies in place. 85% of the countries don’t have regulatory environments. So this is kind of more detailed. And I think without that, you know, it’s difficult to also address other barriers there. So I think that’s why it is also important. For me, I find it very intriguing, what I would call maybe an optimism divide. An optimism divide is inversely related to everything I said now. If you look at the recent studies, actually, the people in Europe and other developed countries are very skeptical about AI, or not skeptical, they’re more fearful. They say, look, AI will come and take our jobs. Around 70% are actually fearful that AI may take their jobs. Whereas when you look in the Global South, it’s opposite. 70% of the people say, actually, AI may help us. It may help develop our economies. And then, two thirds of the people actually look forward to the applications in health and agriculture and other areas. So I think that’s very interesting. So that means for me that if we are managing to bridge other gaps, infrastructure, skills, innovation, we actually have a ready-made population and talent pools and ready-made consumer areas, if you will, as well that are ready to take up on AI and really use it in their daily lives and allow us to kind of drive economic and social development there. Now, what we’re doing from the ITU side and the broader UN side, I think, are complementing what Amandeep was already saying about some of the initiatives there. And of course, now is a crucial moment. Those of you who follow and others of us who follow AI, so-called modalities resolutions, implementation of Global Digital Compact, installing specific modalities, including international scientific panel on AI and global AI governance dialogue. I know the conversation is continuing in New York on this, but of course, we’re not starting from a blank sheet of paper. You know, the UN and ITU have done quite a bit already before to help create that AI governance fabric, if you will. So as Amandeep has already mentioned, we have our AI for Good Global Summit that’s running since 2017 already. You know, this brings all the stakeholders together. Last year, we had the first AI Governance Day. It brought around 70 countries together to exchange views on the governance as well. And of course, just in a couple of weeks, actually, from the 8th to the 11th of July, we’re looking at the next AI for Good Global Summit with the…
So of course, in this regard, ATU is working, we have our own suite of AI standards, more than 400 of them. But at the same time, we’re working with partners within what is called World Standards Corporation, key partners there, International Electrical Technical Commission, IC, International Standardization Organization, ISO, where we bring the standards community on AI together. Again, last year, as I mentioned already, we had the first AI Standards Summit in New Delhi, India, looking forward in December to our second AI Standards Summit, and then during the AI for Good, on the 11th of July, we’ll have AI Standards Exchange also to bring all the relevant organizations together to progress the joint work on AI standards. So they are relevant, they’re interoperable, they can benefit everyone there. Skills, of course, this is a very broad range of things, we work into juice. Of course, we have AI Skills Coalition, our most recent flagship initiative with 50 partners joining us, where we aim by the end of this year to have at least 10,000 people trained in AI skills in different sets of courses, so different sets of courses. But we have other longstanding parts of initiatives that bring thousands of people around the world, from our innovation factory, AI innovation factory for startups that can participate in pitching competitions, to our AI and machine learning challenges to engineers around the world, and we have a lot of interest from developing. countries. And that and that all these initiatives show us that there is a really strong, there’s really strong, you know, talent pools around the world, they just need to be tapped. And of course, and then just the last maybe initiative of all these, the flagship initiative I wanted to mention is Digital Infrastructure Investment Initiative that we have with seven DFIs, Development Finance Institutions, launched in the Brazilian G20. We’re trying to bridge the general digital infrastructure gap with assets of 1.6 trillion, but within that, of course, the infrastructure. And of course, in just over the week, it’s actually just a week, we’ll have financing for development conference in Sevilla, Spain, where we also go for UN financing for development conference, where we’re going with that flagship initiative and we’re looking how to also engage stakeholders around digital infrastructure investment. So as I, you know, in closing, as I think maybe looking back, you know, so what is also key is inclusivity, you know, around the world. And I mentioned that in our AI for Good last year, we had around 70 policy makers, and some of them said it was the first time when they were in the AI governance discussion. And I think that is very important, because I think it’s important that those discussions don’t just, you know, involve the usual suspects, the countries that already have the capabilities and capacities, but really involve everyone from the get-go. And I think great to see AI impact, action, impact, you know, like format, you know, come to different parts of the world. We have like a very strong participation of developing countries in Geneva. Last year, we had, as I mentioned, AI for Good Impact Summit in India. We’re looking forward later this year, AI for Good Impact Summit for Africa and Cape Town. So we’re really also trying to bridge that inclusivity gap and make sure that this reaches everyone, both in terms of skills, infrastructure, but also, importantly, policy discussions that enable all. So I’ll stop there, Henri,
Henri Verdier: Thank you very much. Thank you very much. And thank you for making time within a big and a huge agenda. So now I will come back to my initial… schedule, but we are changing slightly, so I will try to mix the two questions, Yoichi-san and the other speakers. So, first, what is, from your perspective, the main barrier against equitable adoption of AI, and what can we do in the multi-stakeholder cooperation framework? I observe a tendency towards a kind of consensus on the three gaps, but I feel that we are not going far enough. And I was thinking, listening to all of you, sadly, we have examples of great technologies that were not used for the best. Television could have been a brilliant tool for education and didn’t become a real resource for education. Genetically modified organisms could have been a solution for agriculture in tropical areas and didn’t. So, we know from history that sometimes there are brilliant innovations that don’t turn enough into progress. So, collectively, we have to think further, and the position of Japan is very interesting.
Yoichi Iida: Thank you very much, Henri, and thank you very much, Abhishek, for the invitation. So, I try to be brief, but let me talk about the Japan situation before I talk about the international efforts. And if you look at Japan, we have the very unique challenges of the rapidly aging society and also even the decreasing population. So, we really need to make the full use of technology, such as AI, in our society to keep the energy and the liveliness of the society and the community. So, from this perspective, the trust of the people in technology is the key, very, very important element. And also the skills and the literacy are also a very important element in order to make people use the technology without concern and in a very efficient way. Of course, we have a lot of problems elaborated by the colleague from Togo. And also, I envy, Abhishek, when you talk about the skills are OK for India. We have a lot of problems in computing resources, and also the data set, and also the skills and the literacy of the people. But as I said in the beginning, the most urgent problem for Japan is how to make use of this technology to benefit the society. And the literacy and skill of people, and also the trust of the society in the technology is very, very important. That is why we enacted the AI law at the end of last month, which people call not AI regulation law, but AI promotion law. So the law is trying to push the AI usage in the society forward by generating and growing the people’s trust, and also the literacy and the skills. So education is very, very important for us. That requires the government a very radical transformation of the old system, not only in the education, but also the labor’s re-skilling, or maybe the understanding of the people on technology. So a lot of things have to be done before the government. And also, when we want to make use of the AI technology in the society, we have to use the technologies from abroad, across the borders. I don’t believe all requirements, all demands of Japanese people for AI can be fulfilled by the domestic technologies and domestic businesses. So, that is why we need very much coherent and interoperable governance frameworks across the regions and across the countries, across the jurisdictions, so that we can make use of the AI models and the systems without concern when they come from abroad. That is why we are promoting the initiative called the Hiroshima Process, which encourages the AI companies to assess the risks and the challenges in their AI models and take appropriate measures and also share the relevant information with the public very openly. By doing so, we believe we can foster the trust among people on this very powerful technology of AI and people can make use of the technology without concern. So, that is our approach and these are our challenges. In order to do that, we have to work not only with the governments from around the world, but also with the stakeholders from businesses, civil society and academia, all kinds of communities, all together to achieve a type of co-governance which will bring a very safe, secure and trustworthy AI ecosystem across the world. So, that is what we are doing now. I hope we share the same understanding with all colleagues here.
Henri Verdier: Thank you very much. Thank you, Yoshi. So I'm going to Maria Grazia. Before the meeting you told me that we also need a technical strategy for inclusion. So maybe you can take us further, because this is all a bit too consensual so far. We need new ideas.
Mariagrazia Squicciarini: It is consensual. Thank you. It seems my microphone is now working. So thanks a lot for the question. I actually had to take notes, because it got more and more complex as we were talking, and I would like to avoid being repetitive. And it's true: as you said before, a complex problem needs to be unpacked and analysed well in order to find a suitable solution. And you asked, what are the regions of the world? Well, UNESCO has 194 member states, so our territory is the world. And what are the sectors, what do we do in different sectors? Well, we cover every sector, from culture to education to any other sector, in our activities. So the question of the key barriers becomes, in my mind, one of systematising the problem. That is the endowment, an issue that was already raised: compute capability, the endowment of infrastructure. And the ability: it is not only the physical infrastructure but also the human capital infrastructure, the availability of the relevant skills. Let me add a point there. In order to work, live and thrive in what we can now call the AI era, of course we need to train, and Thomas mentioned this too, more people in STEM, for instance, people who really deal with AI and build AI. But we also need to endow the population with the social-emotional skills that are needed to address the change. Because let's not forget, and this goes to the technical issue we were mentioning, that deploying AI in any environment, from public institutions to companies, entails non-negligible organisational changes, both in the tasks that people will need to do in their jobs and in the very jobs that will be available and those that will be created. So there are a number of components that go into that. It's not, let's say, one component; perhaps the solution is in the mix. And why does this also relate to governance? Because in order to have these assets around the table, the ones that allow you to leverage the opportunities that AI may offer, you really need to have the institutions, you need to have the legislation. All of you were mentioning legislation, for instance, initiatives that have been passed in your countries in order to address the AI transformation. And we also need to learn from each other. That is exactly what UNESCO has been doing through what we call the readiness assessment methodology. That is an analysis, and for instance we are now working with India to finalize theirs, that gives you a picture of where the country stands. It is not a ranking. So perhaps the beauty of the inclusiveness of AI is that, between brackets, nobody has it right or wrong. There are good practices, and I emphasize good because the best is not there yet; nobody has the solution to the problem. But there are good practices from everywhere in the world. And in this sense the Global South, and again I don't really like this name, but that's the way it is typically referred to, does have a lot to show, because there are different components and different aspects that need to be taken into account, and everybody can contribute to that. And perhaps the narrative has been going in a direction which is somewhat partial. We typically talk about inequalities in AI in a developed versus developing world frame. Let's remember, and this is something that is very much on the table in the current discussions in the context of the G20, the within-country inequalities.
And you, Yoichi, underlined an important one, that is the generational divide. It is not the same thing to deal with AI when the average age of the population is relatively low, where it is relatively easy to endow the population with certain types of skills because people are closer to education and more receptive, as when the population is getting older. So there are a number of components that need to be around the table. And perhaps a basic understanding, and this is what Henri was referring to when we were talking before, is that there is another false legend that is typically sold, namely that inclusion basically benefits only the included. Actually, what we know about AI is that if we have biased data, if we don't have the infrastructure there, if we don't have governance mechanisms, if, for instance, a number of languages are not included and so certain communities don't have their societal habits and contributions reflected, ultimately the AI itself performs worse. When you go to what is called in the jargon 'in the wild' testing, it will perform worse. So it will actually generate systems that are not as performant or as fit for purpose as they should be. So including, bringing more actors around the table, as you mentioned, for instance, the multi-stakeholder approach, having better and more representative data, including women, for instance, in the AI transformation, does not only benefit those who are included, but also those doing the including. So ultimately, inclusive AI is actually very good business, because it's more accessible and brings better benefits. Another component that we were mentioning is also about the companies. And this refers again to inclusiveness from a different perspective: now we are talking about who is already in the game and who would like to enter the game, so to speak. So if we think about what the constituency in AI is nowadays, we see a number of very big corporations that typically come from a certain number of countries. Then there are a number of startups that are aiming to scale up and are not really finding it easy to do so. And why should it be in our interest to let them do that? Because there is plenty of evidence that radical innovation, breakthrough innovation, tends to come from young and small entities; there is plenty of research about that. And so the issue is whether we only care about the AI of today, or we also care about the AI of tomorrow, because unless we let these companies bloom, the payoffs from AI tomorrow, new types of AI, are less likely to be there. And all of this nevertheless has to happen within some guardrails, like the ones foreseen, for instance, in the UNESCO Recommendation on the Ethics of AI. And I will close there, because we have seen again and again in history that what is technologically, technically feasible is not necessarily societally desirable. So perhaps we need to have a conversation about what we do not want AI to do, and let the rest bloom to address the many challenges that society is facing these days.
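Her point that exclusion degrades the deployed system itself, not only the excluded communities, can be illustrated with a toy simulation. The following is a minimal sketch, not any UNESCO methodology: the two communities, their cutoffs and the numbers are invented; it only shows that a model fitted without one community's data serves that community, and therefore the system "in the wild", worse.

```python
import random

random.seed(0)

# Hypothetical setup: two communities whose outcomes follow different patterns.
CUTOFFS = {"A": 0.4, "B": 0.7}

def sample(group, n):
    """Draw (group, feature, label) triples for one community."""
    return [(group, x, int(x > CUTOFFS[group]))
            for x in (random.random() for _ in range(n))]

def fit(train):
    """Learn one decision cutoff per community present in the training data."""
    model = {}
    for g in {grp for grp, _, _ in train}:
        pairs = [(x, y) for grp, x, y in train if grp == g]
        model[g] = max((t / 100 for t in range(101)),
                       key=lambda t: sum(int(x > t) == y for x, y in pairs))
    return model

def accuracy(model, test):
    default = next(iter(model.values()))  # fallback for communities never seen in training
    return sum(int(x > model.get(g, default)) == y for g, x, y in test) / len(test)

wild = sample("A", 500) + sample("B", 500)          # deployment meets both communities

biased    = fit(sample("A", 1000))                  # community B absent from the data
inclusive = fit(sample("A", 500) + sample("B", 500))

print("trained without B:", round(accuracy(biased, wild), 2))     # noticeably lower
print("trained with both:", round(accuracy(inclusive, wild), 2))  # close to 1.0
```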
Henri Verdier: Thank you very much. So I'm going to our online friends, and maybe I will start with Audrey. The OECD is another important multistakeholder body, and you have various initiatives, including hosting GPAI, which is a favorite project of France, because I did negotiate the beginning of GPAI seven years ago. So do you think that we are going in the right direction to make sure that we do include, in this multilateral and multistakeholder conversation, the developing countries and the needs of the developing countries?
Audrey Plonk: Yeah, thanks for the question, and hi everybody. It's great to be here. Thanks for organizing this really important discussion. So the short answer is yes, I think we're on an excellent track to being inclusive in the context of the Global Partnership on AI, which is no longer a hosted entity at the OECD; it is now part of the OECD, which we were very pleased to announce last year in India at the GPAI summit. So with that, we hope that the expansion of GPAI will be interesting to a large set of countries, at whatever level of development in AI they are, so that they can come to the table and work on a set of different topics, many of which have been discussed here today. And in terms of the question around where we see divides, I will try not to be redundant with things that others have said. But I do think there are some institutional divides and capacity divides, in terms of the ability of countries to participate in certain activities. And I think that's where Thomas mentioned policy divides. And we do see a lot of effort, and I want to commend colleagues in UNESCO and across the UN system for their work as well, to try to accelerate governments' efforts to develop policies and strategies and to put AI at the center of government policy. But we see that there's still a lot of work to do there. For example, we maintain the largest database of AI policy initiatives around the world, covering over 72 jurisdictions. But there's a lot of room for improvement and growth, and also for learning from each other. So it is not just about collecting the information and data from countries about their efforts, but also about finding ways to access and share that data in ways that help build capacity in other countries. So I think on the policy and institutional capacity of countries and governments to participate in the global dialogue, there's a lot of room for us to work collectively to bring others along, and GPAI is a place where we are fully committed to doing that. I also want to say something about the infrastructure piece, because many people have mentioned it and it's exceedingly important, not just for AI development and deployment but also for general digital transformation. And we see that there's of course a lot of opportunity there. And I just want to mention one project, because it's new and not yet totally public: a new methodology we're developing to measure compute capacity and availability in different countries. We're talking about that in the GPAI context; those of you who are at that table know. But I think it will be really important that we put good empirical evidence behind some of the discussions that we're having at the political and policy level, so that we can actually, eventually, move the needle on where things like investment are going and where business opportunities are moving. And I also want to echo the reality of financial divides in terms of investment in AI, and the ability, as other colleagues have said, of small and medium enterprises, which are of course the lifeblood of the global economy, to engage in the AI world. And then, in terms of the skills and education divide, I think these are different things, and we need to think more granularly about what we're trying to do with skills and with education, everything from early education and STEM all the way through upskilling and training of workers and aging populations.
And what we're seeing is that targeted media literacy programs, targeted efforts to meet different populations where they are, within a country and across countries, are extremely important. Not just for their ability to adapt to AI coming into their lives, but also to adapt generally to digital transformation. And I think the more global cooperation and sharing of experiences that we can have in that regard, the better the outcomes will be in the long term. And I completely agree that we need to be thinking longer term, not just about today and tomorrow, but about where our population, where our society, is going to be in the next 10, 20, 30 years, and how it will be using this technology. With that, I would say, finally, in terms of culture and language, which many people mentioned, one important effort that we have at the OECD is our AI observatory; you are probably all very familiar with it. And one of the big efforts there, in order to help contribute to a more multilingual, multicultural environment, is that we're trying to make as much of that data available, and in as many languages, as possible. So, for example, if you go to the live data coverage on the observatory and you look at the media coverage of AI, you can see and read about what's happening in AI in many, many different languages. The same is true for the incident monitor that we've been building, where we've developed a methodology for classifying problems that happen in the ecosystem relating to AI systems. And there you can look in native languages across different countries around the world. And so I think the more that we can cooperate, both on the data side and on the policy side, the better the picture we're going to have of what's working and what's not working. And lastly, I'll close with this in the interest of time: a big game changer in AI, particularly in the developing world, is going to be adoption and diffusion across different aspects of industries and society. And I think that's the case really for all countries and everybody; even big countries are grappling with and challenged by how to use this technology. So that's a shared experience that everyone is going through: how do we make ourselves more competitive, more productive, by using these technologies? And I think that's a big opportunity around the various international, multilateral tables to really work together to get the best possible outcomes for our populations.
Henri Verdier: Thank you very much. So now we are going to Antigua and Barbuda, and I ask Andrea Jacobs: what's your view regarding this question? Thank you.
Andrea Jacobs: So that's a very, very good question. And, you know, I've heard a lot of unpacking from different regions, and Antigua and Barbuda certainly sits with Africa on what was said. So for the Caribbean, and more broadly among small and developing states, the most pressing structural barrier to equitable adoption is the lack of robust digital infrastructure and institutional readiness. And this sentiment, dare I say, is echoed across the Global South. This includes unreliable connectivity, particularly in rural and outer island areas. Then we have weak data ecosystems, which limit our ability to develop context-relevant AI. And then we have limited regulatory and technical capacity to ensure safe, ethical and inclusive AI use. On the technological side, there's a major imbalance. We are overwhelmingly consumers of AI technologies that are developed elsewhere, oftentimes without our realities, languages or priorities in mind. Most of these companies don't bear this in mind. And as you know, the persons, or the companies rather, who make the AI products might not think about people in the Global South, or even black and brown people, dare I say. And that's where the biases come in. So the tools that we adopt are not built for us, and that poses a real, real risk. And then the question is: why does this matter, right? Why does this matter for the Global South? Well, if these disparities continue to go unaddressed, global AI will continue to serve the few rather than the many, reinforcing existing power imbalances, embedding biases, and excluding billions from shaping the future of technology. And this is why, in Antigua and Barbuda, we speak about having a seat at the table all the time, every time, in these AI meetings, because the world needs to know that we are in an era where we are being left behind. The private companies are making these products. We are not getting our voices heard enough. We don't even have rules and regulations. We don't have good governance structures. We don't understand the ethics of AI, how it's going to impact people in the Global South, and, more importantly, black and brown people like myself. And the situation is very, very real. And then we're moving into the context of AGI, which is the next level of AI, and we haven't even mastered narrow AI as yet, yet we're moving forward towards general AI. So we need to be a part of the conversation, not just as recipients, but as equal partners in co-creating the values, rules, and technologies that will define our shared digital future. And then lastly, for us in the Caribbean, and somewhat in the Global South, because I talk a lot to my partners in the Global South and we share this view: we need local infrastructure and talent development, we need culturally relevant innovation ecosystems, and we need stronger participation for countries in the Global South. As long as we remain primarily consumers of AI products made elsewhere, without a seat at the design and decision-making table, we risk adopting tools that entrench inequality instead of empowering transformation.
Henri Verdier: Thank you. Thank you very much. And now I'm going to our friend, Sharad. Sharad, I don't see you so far. Again, the same question about this AI divide and how cooperation can help to fix it.
Sharad Sharma: Right. So let's look at the digital divide first. The digital divide in some countries has been coming down quite rapidly. India is an example of that, and there are many lessons to learn from it. At the same time, we must realize that the AI divide is a very big problem, because we know that the first version of AI that we have is actually social media. Social media is entirely AI-driven, right? And that is how the social media platforms ensure that we spend increasing time on their platforms, year after year. Now, the question, of course, is: how have we done in dealing with these pernicious effects of AI diffusion in the world of social media? I would say we have done very poorly. What is the test? The test is: do these new systems change the balance of power between the citizen and the state in favor of the state? Do these systems change the balance of power between the consumer and the provider in favor of the provider? The answer to that is yes. And therefore, ironically, since we are in a session on internet governance, we have to go back and look at this and ask why our current efforts at internet governance, especially when it comes to Web 2.0, have failed. This situation is not getting better; it is getting worse. So if we have to make progress with AI, we have to first acknowledge that the last 10, 15 years have been years of failure, and not perpetuate the things that we have been doing which have led to this failure. So what are those things? There are three that I would like to point out, based on the experience of India Stack that Abhishek mentioned early on. First, traditional regulation has to be replaced by techno-legal regulation. Our Prime Minister talked about it at the AI Summit in his brief speech there. This is absolutely essential. The old form of regulation can be gamed by the producers of digital services or AI services. They could do it five years ago, ten years ago, fifteen years ago, and they'll be able to do it as we move forward. So we have to bring in a new regulatory paradigm, and that is techno-legal. There are a lot of learnings about that here in India which are available to the rest of the world. The second is that we have to change the nature of innovation. Innovation has to become innovation that is built on public goods plus private innovation. Because if you don't have public goods and the innovation is entirely in the realm of the private sector, then, as Andrea pointed out, the outcomes are going to be terrible. We will all be just consumers, not producers. India will have super teachers, will have super doctors, will have better medical devices; even our students will learn better. But the people who provide those AI models to make it happen will not be from India, and the value capture of all this will not be in India. So this is a very serious problem that we are looking at, and therefore we have to look at the innovation architecture itself. And the third, I don't know whether Amandeep is still there, but one of the big takeaways from the UN AI advisory body is that we have to create a new type of infrastructure, a public infrastructure for data set sharing that is controlled and yet unlocks hidden data from companies and countries in a manner that they can control. In the UN report, that is recommendation number six, the global training data sharing framework, and that is absolutely essential. India is at a very advanced stage of building that out.
And again, it was mentioned at the Paris AI Summit by our Prime Minister; we call it DEPA. And why do we call it DEPA? Because it's about data empowerment and data protection. The two have to go hand in hand, and that requires fresh thinking as well. So I'll end here by simply saying: more of the same is a recipe for disaster. We must acknowledge, as a group of people, that we have to make a new beginning. If we don't make a new beginning and just keep doing what we've been doing for the last 10, 15 years, we will not get good outcomes, good results. And then we will just be a talking shop, and we'll gather here again five, ten years from now and lament how little has changed from where we are today. So let's please make a fresh start. The Indian AI Impact Summit will attempt to bring these ideas to the table, and we are hoping that as you participate in it, you get infected by this spirit of making a fresh beginning and taking these new ideas into the AI realm as we move forward. Thank you so much.
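To make the "techno-legal" idea a little more tangible, here is a minimal sketch of a consent-artifact check in which the scope, purpose and expiry of a data release are enforced by code rather than only by contract. The schemas and names are invented for illustration; this does not reproduce the actual DEPA specification.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical structures; the real DEPA framework defines its own
# consent-artifact schema, signatures and institutional roles.

@dataclass
class ConsentArtifact:
    user_id: str
    requester: str
    allowed_fields: frozenset  # what may be shared
    purpose: str               # why it may be shared
    expires: datetime          # until when

@dataclass
class Record:
    user_id: str
    fields: dict

def release(record: Record, consent: ConsentArtifact, requester: str, purpose: str) -> dict:
    """Return only the fields the consent artifact covers; refuse everything else.
    The 'techno-legal' point: the legal terms (scope, purpose, expiry) are
    enforced by the code path itself, not merely written into a contract."""
    if (consent.user_id != record.user_id
            or consent.requester != requester
            or consent.purpose != purpose
            or datetime.now(timezone.utc) > consent.expires):
        raise PermissionError("no valid consent for this request")
    return {k: v for k, v in record.fields.items() if k in consent.allowed_fields}

# Usage sketch
record = Record("user-42", {"income": 52000, "loans": 2, "address": "12 Main St"})
consent = ConsentArtifact("user-42", "lender-x", frozenset({"income", "loans"}),
                          "credit-assessment",
                          datetime(2030, 1, 1, tzinfo=timezone.utc))
print(release(record, consent, "lender-x", "credit-assessment"))
# -> {'income': 52000, 'loans': 2}; the address is never exposed
```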
Henri Verdier: Thank you, Sharad. So now I will reschedule everything regarding the end of this round table. We'll focus on one question. I will ask this question to the six speakers, because, Abhishek, you will conclude. And the question is quite simple: can you share with us your suggestions for actionable pathways, concrete ideas to build a really inclusive AI ecosystem? But I have to mention that when we started one hour ago, I did ask every speaker to speak for two minutes. The only one who did respect the rule was Minister Lawson, because then I changed the rules because of Abhishek and Amandeep, sorry. So we give a bit more time to Minister Lawson, maybe five minutes, and I beg the other speakers to stay within two minutes, because we would love to exchange a bit with the room, if possible. So the question, concretely: actionable ideas, what can we do to progress in the field of inclusivity? Minister Lawson.
Cina Lawson: Thank you very much. So the first comment I will make is that AI has to work for us. It means that we have to make sure that it is designed to solve our problems, our local problems. The instances where we used AI in Togo were to really solve our problems. Number one, we used AI to prioritize beneficiaries for our financial aid programs. The second instance where we used AI was to design better networks: we were deploying fiber networks, and we used AI to really plan the routes in a way that the networks would be efficient. I'm saying that because when we think of AI as a tool and we say, okay, it has to work for us, it also implies a few other things. One issue we faced when we were doing that was the availability of local data. So there is a bit of work that needs to be done to build this data and the data sets. I really appreciated the comment on public infrastructure for data sets that was made earlier; I think it's extremely relevant for the Global South. The second comment I'll make is that today, most AI and AI platforms are designed in a language that is not our language. The majority of what people call the Global South speak different languages, and so we need to make sure that the new platforms and the new systems are designed in local languages, because by designing them in local languages we can have better participation and also relevant data sets. That's also the issue: if you build something that seems a bit foreign, then it's extremely difficult. One comment that was made also has to do with culture, making sure that the data sets represent our cultures. And I'm saying that because if, in the future, an AI platform will represent reality, or will represent the totality of knowledge, we have to make sure that our cultures are also represented in these platforms. And I think that, with India hosting the summit, and India is well known to have such a diverse culture within its 1.2 billion population, India can really be a huge driver in making sure that there is diversity in culture. The third thing I would say is that one thing that is extremely important for us in Togo is to make sure that we are part of the solution. Enough, and I think you've heard it everywhere, of the Global South being just a consumer. So what it means is that we need more alliances or programs to fund research: research on the continent, researchers, joint research programs. I know that we, and many countries, had conversations with India and other places about sending researchers, funding research and so on. So research programs are going to be extremely important in this new world, and also shared infrastructure, because we did mention that we lack GPUs and other things and that we don't have enough data centers. So it means that we need to build new programs, new models where we share infrastructure. And I think that we need to build a business case and new business models that take that reality into account. And again, I'm looking to India, saying that these types of outcomes need to be discussed during the summit: which kinds of models we need to build so that we can make sure that the Global South is part of this new world. The last comment I'll make, and it's an important one, has to do with training. How do we design the new training programs?
Because we do realize that we have a training issue, we have a skills issue, and we need to have conversations about effective talent training. I think there is not a lot of investment being made at the global level with regard to training talent without us needing to send that talent abroad to be trained. How we build models and programs within the continent and within the Global South so that we improve talent training is also going to be extremely relevant. And I think that when we talk about all these issues, India is pioneering on some of them, and the conversations need to happen during the next summit. So the key words here are participation, training, research and local languages. These are all words that are very important.
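Minister Lawson's first example above, using AI to prioritize beneficiaries of a financial aid program, comes down to scoring households on proxy indicators of need and ranking them against a budget. The sketch below only illustrates that pattern: the indicator names, weights and records are invented, and Togo's actual program relied on its own models and data sources (satellite imagery and mobile telecom metadata are mentioned elsewhere in this document).

```python
# Minimal sketch of prioritising aid beneficiaries by a vulnerability score.
# All indicators, weights and records below are invented for illustration.

WEIGHTS = {
    "phone_topup_avg": -0.4,  # lower average top-ups -> likely poorer
    "nighttime_light": -0.3,  # darker surroundings   -> likely poorer
    "household_size":   0.3,  # larger household      -> more need
}

def vulnerability(person: dict) -> float:
    """Weighted sum of normalised proxy indicators (higher = more vulnerable)."""
    return sum(weight * person[key] for key, weight in WEIGHTS.items())

def prioritise(people: list, budget: int) -> list:
    """Rank candidates by score and keep as many as the budget allows."""
    return sorted(people, key=vulnerability, reverse=True)[:budget]

candidates = [
    {"id": "p1", "phone_topup_avg": 0.2, "nighttime_light": 0.1, "household_size": 0.9},
    {"id": "p2", "phone_topup_avg": 0.8, "nighttime_light": 0.7, "household_size": 0.3},
    {"id": "p3", "phone_topup_avg": 0.3, "nighttime_light": 0.2, "household_size": 0.6},
]

for person in prioritise(candidates, budget=2):
    print(person["id"], round(vulnerability(person), 2))
```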
Henri Verdier: Thank you very much. So for the next speakers: one idea, one priority, how to implement some concrete pathway. Yoshi?
Yoichi Iida: Thank you very much. The most important point, okay, yeah, is that if we want to materialize some AI applications, AI services, based on the concrete demands and concrete needs of the people, we need to work in a multi-stakeholder way, and we need to work together to understand each other and create the AI services which respond to the concrete demands of individual users. And probably the Global Partnership on AI, GPAI, would be one of the good forums to realize the multi-stakeholder approach. The India summit will also be another opportunity. We are also running the Hiroshima Process Friends Group, which many developing countries are joining. And we also work together with AI companies and businesses, and with international organizations such as the OECD or UNESCO, to understand and create new value through AI services. So we must use these many opportunities to realize and materialize the multi-stakeholder approach in on-the-ground services and on-the-ground AI applications.
Henri Verdier: Thank you very much. Thank you. Maria Grazia.
Mariagrazia Squicciarini: So actually, now that we seem to agree on the what, I think we should move to the whom and the how. Because if we really want impact, we need to know who is around the table and how we do things. The other thing is to move from fixing problems ex post to having ethics by design, and ethical means abiding by human rights, human dignity and fundamental freedoms, because everything becomes much easier then. And the other thing that I think is important in order to move towards impact is to move beyond a biased-data type of approach, whereby we think that if we fix the data to start with, the rest will come with it. Because there are a number of inequalities, a number of challenges, that may emerge only by the time we deploy AI systems in the real world. And so having a suitable design and a good implementation, but also checking afterwards, as we do with any other product, is, I think, fundamental to make sure that AI responds to our societal needs. I'm trying to be very disciplined here.
Henri Verdier: And you were, thank you very much. So I’m going online now, and I’m going back to Paris. Audrey, your main idea?
Audrey Plonk: Well, I think the main thing I would offer at this point is to join us at the Global Partnership on AI to advance on some of these topics. And with that, I'll probably save you a lot of time to get through the other speakers, but there's a lot of really exciting work happening. It's founded on the OECD AI principles, and it looks at things like deploying AI in agriculture and working on AI in transport systems. So I invite you all to come work with us. Thank you.
Henri Verdier: Thank you very much. And we are coming back to the Caribbean region. Andrea?
Andrea Jacobs: Okay, so I'm going to choose my top two, even though I have maybe five. So, until we progress to becoming producers of AI products, we remain first and foremost consumers. We need to understand that. And as consumers, we have a vested interest in how these technologies are built, governed and applied. We need to understand that: first of all, we are consumers at the moment; we will progress to being producers, but until then, we remain consumers. And that is why the Global South must use our collective strength, we must use our collective voices, to ensure that we advocate for inclusive, transparent and accountable AI governance frameworks. Then the second thing is that we need to start to develop and invest in local data ecosystems, data rights, all that sort of stuff. We need to ensure that our people have the knowledge and the skills to retool and to upskill. So those are my top two, and I'll pass it on to Sharad for brevity.
Henri Verdier: Thank you very much. Sharad?
Sharad Sharma: You know, I mentioned some points last time, but I'll share another learning that we've had. To place it in context: today, India does more than 50% of the world's digital transactions. These are not just commerce transactions; these are direct benefit transfer transactions, which poor people in India rely on to get their benefits from the government, and those benefits, of course, come from our taxpayers. This also includes taxpaying transactions, and again, India leads in that: more than 50% of the world's taxpaying transactions by volume happen in India. All of this has happened since 2012, for many reasons: techno-legal regulation, DPI, and so on. But in addition to that, as Andrea knows, we were relentlessly focused on one persona that we had to take care of, and that was a street vendor called Rajini. Some of you may have seen those slides. And by being relentlessly focused on that street vendor for the past 13 years, we stayed focused and determined to be able to solve the problem that we're talking about. Now, when it comes to AI, we are gravitating towards picking young adults as our focus area. Because while AI may lift young adults and make them better students, it also has the potential to have an enormously destructive effect on their lives. They may lose their cultural moorings, they may get distracted by pornography, they may get distracted by gambling and gaming. There are a number of concerns that arise when it comes to child safety. Adult safety is important, but I would say child safety is super important. And it is also important from a sovereignty perspective: is each country perpetuating its culture for these young children who are going to be living digitally and using AI systems? So that is our focus. And I would suggest that this ought to be a global focus, not just an India focus, not just a Global South focus; it ought to be a global focus. And if we now rally around this and measure ourselves and ask, are we making progress in protecting our children while we empower them with AI, I think that we will have more flexibility in deciding what works and what doesn't work, because that will be the… If it's not working, let's try something else to make progress over the next 10, 15 years. So this would be my submission to all of you.
Henri Verdier: Thank you, Sharad. So thanks to all of you, because you did save time, so I can ask the same question to Abhishek. Then we'll take a few questions. So if you are new to the IGF system, I'll tell you: if you want to ask a question, you go and line up after the first speaker, and then, Abhishek, you will conclude completely. So, your two points regarding concrete outcomes.
Abhishek Agarwal: Yeah, so, what we need to do: a lot has already been said, and I would say that if I have to list what we need to do individually as countries, in India, of course, our focus is to build voice-enabled services, so that we use the technologies of AI, NLP and Gen AI to empower those who are not part of the digital ecosystem. As a community, as a group of nations working together, what we need to do is to create a framework in which we enable access to compute, to data sets and to algorithms for the larger group of countries of the Global South. And how do we do it? We can build repositories of AI solutions, like we came up with the global DPI repository, DPI solutions which can be shared across nations, as an outcome of the G20 summit that we hosted. Similarly, can we create a repository of AI applications across sectors which can be shared with different countries and adopted and deployed? For example, if you have an AI-based application to diagnose cancer, or diagnose tuberculosis, or help farmers, it will have use cases across geographies, across countries. Even though one country has developed it, it can be deployed elsewhere. So a repository of AI-based applications would be one of my wishes that we should work on together. And similarly, another thing that is required: we all talk about data sets, and when we talk about data sets, anonymization and privacy preservation become equally important. So can we develop tools which can be shared across countries? Can we fast-forward the development of data set platforms, enabling data sharing among various stakeholders, not only within our own countries but globally? Sharad mentioned the DEPA framework that we have; that can serve as a basis for global data sharing protocols, and that would really, really fast-forward building AI applications and models across the world. So I would conclude by saying that these are my wish lists: within India, voice-based services; and as a global community, building a repository of AI applications and tools for enabling data sharing and building applications.
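One small illustration of the kind of shareable privacy-preserving tooling Abhishek calls for: a minimal sketch in which direct identifiers are dropped and stable identifiers are replaced by keyed pseudonyms, so records can still be linked without exposing the underlying ID. Field names and key handling are hypothetical, and keyed pseudonymization alone is not full anonymization; a real toolkit would add aggregation, k-anonymity checks and governance.

```python
import hashlib
import hmac

# Placeholder secret and field names; a real deployment would manage the key
# securely and define the identifier lists per data set.
SECRET_KEY = b"replace-with-a-managed-secret"
DIRECT_IDENTIFIERS = {"name", "phone", "address"}

def pseudonym(value: str) -> str:
    """Keyed hash: the mapping cannot be reversed without the secret."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def anonymise(record: dict) -> dict:
    """Drop direct identifiers, pseudonymise the linking ID, pass the rest through."""
    out = {}
    for field, value in record.items():
        if field in DIRECT_IDENTIFIERS:
            continue                        # never share direct identifiers
        if field == "patient_id":
            out[field] = pseudonym(value)   # linkable across data sets, not re-identifiable
        else:
            out[field] = value
    return out

print(anonymise({"patient_id": "TG-0012", "name": "A. Doe",
                 "phone": "+228 90 00 00 00",
                 "diagnosis": "tuberculosis", "age_band": "30-39"}))
# -> {'patient_id': '<pseudonym>', 'diagnosis': 'tuberculosis', 'age_band': '30-39'}
```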
Henri Verdier: Thank you, Abhishek. So, I'm supposed to be the moderator, so I don't contribute. Maybe my two cents: I would just mention the utmost importance of public research and common knowledge. We need a common knowledge for humankind, and we need to empower public research too; not just one of them, but the two. So please, we have three questions if I'm correct.
Audience: Hello, my name is Martina Legal Malakova. I'm president at GAIA-X Hub Slovakia, I'm vice chair of the SME committee at Business at OECD, and I am also a MAG 2024 member. My question is exactly to you, Mr. Henri Verdier. It's a pity that you are not a speaker today; you are often a very good speaker. So I don't know if you heard the idea from Joseph Gordon-Levitt, who shared today that several companies earn money on our data and should give this money, or this benefit they derive, back to us as citizens, and maybe also to SMEs. In principle I agree with this idea, but I don't know how to do it, because it could completely change the economy or the system. So my question is: do you have an idea of how this model could work in the world? Thank you.
Henri Verdier: Complex question, and I'm not supposed to be a speaker. In a nutshell, we have an experience with social networks. They did take some advertising revenue, and they did weaken a bit the ecosystem of the media. If we had asked those companies to pay us, it would have been, I don't know, two or three euros a year. That's not a lot compared to the benefits they derive and the negative externalities they generate. So it might be useful, especially for the press, for example, or for some content producers, but it cannot be a global solution to finance the global development of humankind. But that's an interesting point. Please, second question, for a real speaker.
Audience: It actually follows on from the last question that was asked. My name is Deanne Hewitt-Mills, and I run a global data protection office consultancy. Essentially, we have responsibility for overseeing data protection, cyber and AI compliance for large multinationals. We're UK-based, but we're global, and I'm here in Norway actually launching our Nordics branch. And I was actually one of the first data protection offices to attain what's called the B Corp standard. B Corp, well, as I say, the office is based in the UK, is a standard where organizations have demonstrated high levels in ESG, so environmental, social and governance. And what we have to do is demonstrate that we're a business that's a force for good. You actually have to attest to what you've done to make a positive social impact, have a report that sets this out on a yearly basis, and then you're renewed every three years. And I've done this because I really believe in using business as a force for good. And I think actually it would be a great thing if many other organizations did the same, because I'm not a large tech business with deep pockets, but I've seen the social impact that I've been able to have, and I think other organizations could be made to do the same. I'm really pleased to see all the women on this panel, because I think if you actually invest in women and in women-owned businesses, and also have a structure where businesses are required to demonstrate their social impact, there's a lot that can be done to improve governance in this space. So it's actually just a recommendation based on a real-life case, which is the example of what I have done as a business owner.
Henri Verdier: Thank you. I’m not sure this is a question, but does someone want to answer?
Mariagrazia Squicciarini: I would just like to point to something that you pointed to, perhaps implicitly, and that is trust, which is really fundamental for the whole business of AI. And also for the data, because if we don't trust, then even with all the regulations that we now have, which finally try to protect us and say, look, you may say yes or no to giving this data, we will get more and more patchy data sets, and building AI on those is going to be really challenging. Someone was talking before about AGI, but let's talk, for instance, about synthetic content: let's talk about how to use it in a decent way, for a good reason, for instance to fix patchy data in order to have representative data sets. So, in my mind, it all comes back to the trust on which we need to have consensus. You were mentioning that it's about leveraging the technology in the way we want. And again, it actually does good to the technology and the businesses themselves, and that's what you were actually pointing to. Thank you.
Audience: Hi, I'm Nupur Chunchunwala. I run a foundation that unlocks the potential of neurodiverse individuals globally; we work with governments on this. And today I've heard a lot about inclusion and diversity, but unfortunately only in the context of the South, or language and culture. I think a good reminder is that humans are diverse. We have an aging population, over 10 percent, that's going to get impacted. We have, of course, gender. We have ability, in terms of disabilities, and a large population of neurodiverse individuals. Our latest data on Gen Z is that 53 percent of Gen Z identify as being neurodiverse. If these groups are not included in the AI revolution, we will have a big divide issue that goes beyond the Global South or language. I'm not sure if this is a question or a comment, but how do you include them in the conversations on international cooperation, the SDG goals, the impact on children, because AI is also rewiring their brains; we see a lot around the anxious generation and their mental health and employability.
Henri Verdier: So, I don't know who wants to answer. Online, someone online? One, two, three. Someone in the room?
Mariagrazia Squicciarini: I don't want to monopolize this conversation, but it's true that at UNESCO we do have a full program about inclusiveness of people with disabilities, from sport to AI, so that it's addressed from many points of view. Actually, going back to the start of your question, about neurodiversity: perhaps you don't know, but UNESCO, in the UN system, is tasked with dealing with the ethics of new technologies, hence the work on AI that brought us here today. The latest recommendation that has been worked on is about neurotechnologies and the impact they have on rights, on people, and again on what society wants them to do or not to do. And special attention is also paid to the crossroads between AI and neurotech, because that's where the biggest impact on societies may be. So, there are ways of actually bringing these different aspects into the conversation, and when we say inclusivity, we say inclusivity 360 degrees.
Henri Verdier: Thank you. Thank you. I will let Abhishek conclude our work, but just to mention, I will quote you, Minister, but I will quote Thomas too. Thomas spoke about the optimism divide, and I remember you told me once: in the North, you are pretty sure that you will have some benefits from AI, so you try to fix the problems, the risks, et cetera. We are not sure so far that we will benefit from AI enough. So that's maybe the difference, and that's why we did design this event today around this divide. And yes, we know and we respect that there are a lot of other divides, but this one is very important and has to be considered in itself. We have more than three minutes, Abhishek, to thank you for this initiative and to let you conclude our work.
Abhishek Agarwal: No, no, I must thank you all. Thank you especially, Henri, for moderating it so beautifully. In fact, I was initially thinking, we have 10 speakers, how would you manage? But you did it beautifully; you got everyone to contribute. And the thoughtful contributions that came from all of you, and especially all the panelists, different perspectives from all over, were very, very useful and very relevant, and have given us a lot of input as we frame the themes for the AI Impact Summit. Over the last 90 minutes, we not only identified the various barriers, the various obstacles, what needs to be fixed for moving ahead on the AI story, what limits equitable access to AI, but we also found opportunities, identified solutions, identified interventions that can help shape a future where AI will truly work for everyone. One very important message that also came out in the discussions today, especially with the references to the Hiroshima Process Friends Group, the UNESCO efforts, the OECD and GPAI efforts, and the UN efforts, is that there is an urgent need for inclusive multilateralism, one that listens to and is shaped by the experiences of the global majority. How do we ensure that countries of the Global South also become part of the conversations at every forum, whether through the efforts made to make GPAI more inclusive, the efforts to involve developing countries in the Hiroshima Process, UNESCO's work on ethics, or the UN's efforts to bring together a consensus with the Global Digital Compact initiative? We also heard about the importance of addressing the gaps in access to infrastructure, how we ensure culturally grounded datasets, how we enable cross-border cooperation, and above all how we move from high-level commitments to real actionable pathways; that becomes very, very important. And as we mentioned right at the beginning, we planned this event with the IGF and support from ONRI as a precursor to the AI Impact Summit, which India will host in February 2026, and the ideas shared today will become part of the themes as we move forward. I look forward to involving you as we develop the concept notes and the themes and create the sessions. This dialogue will continue through a participatory and transparent process: when we plan the sessions for the main summit, we will be doing public consultations, we will be doing online meetings, we will have working groups which will work in a collaborative spirit, and we will have an open call for side events. We look forward to various entities, whether from government, civil society, multilateral bodies or other important stakeholders, holding side events during the summit. We invite all of you to stay connected and engaged as co-creators of this journey as we plan the summit in February. On behalf of the Government of India and the India AI Mission, I would once again like to thank the IGF Secretariat, the Government of Norway and our distinguished moderator, Ambassador Henri Verdier, and each of our speakers, Her Excellency the Minister, Yoichi, Maria, Amandeep, Sharad, Andrea and Audrey, for joining us today and making this session so meaningful and rich in substance. We look forward to building on this momentum and to seeing most of you at the AI Summit in February next year. Thank you, and I look forward to the remaining sessions of the IGF here. Thank you.
Cina Lawson
Speech speed: 136 words per minute
Speech length: 1323 words
Speech time: 580 seconds
Infrastructure gaps including lack of connectivity, reliable electricity, GPUs, and data centers hinder AI development in Global South
Explanation
Minister Lawson identified infrastructure as the primary barrier to AI development in Togo and Africa. She emphasized that countries are still struggling with basic connectivity and reliable electricity, and lack access to GPUs and data centers necessary for AI development.
Evidence
Mentioned the need for new business models and funding mechanisms to support infrastructure development in the Global South
Major discussion point
AI Divide and Barriers to Equitable AI Adoption
Topics
Development | Infrastructure
Agreed with
– Abhishek Agarwal
– Tomas Lamanauskas
Agreed on
Three fundamental gaps hinder AI adoption: infrastructure, skills, and data sets
Skills shortage and declining interest in math and science education creates major challenges for AI adoption
Explanation
She highlighted that fewer children on the African continent are choosing to study mathematics and science, creating a significant skills gap. This educational challenge must be addressed to enable effective AI adoption and problem-solving capabilities.
Evidence
Noted the declining enrollment in STEM subjects across Africa as a concrete example of the skills challenge
Major discussion point
AI Divide and Barriers to Equitable AI Adoption
Topics
Development | Sociocultural
Agreed with
– Abhishek Agarwal
– Tomas Lamanauskas
Agreed on
Three fundamental gaps hinder AI adoption: infrastructure, skills, and data sets
Data set availability is crucial – countries need relevant local data to build effective AI applications
Explanation
Minister Lawson emphasized that if digital transformation is a challenge, many countries lack the necessary data to apply AI algorithms effectively. Building comprehensive datasets requires numerous projects and significant investment.
Evidence
Provided examples of Togo’s AI use during the pandemic using satellite imagery and mobile telco metadata, demonstrating successful AI applications with available data
Major discussion point
AI Divide and Barriers to Equitable AI Adoption
Topics
Development | Legal and regulatory
Agreed with
– Abhishek Agarwal
– Tomas Lamanauskas
Agreed on
Three fundamental gaps hinder AI adoption: infrastructure, skills, and data sets
AI platforms designed in foreign languages exclude Global South populations from participation
Explanation
She argued that most AI platforms are designed in languages that are not native to Global South populations. This language barrier prevents meaningful participation and limits the relevance of AI solutions for local communities.
Evidence
Emphasized the need for AI systems designed in local languages to ensure better participation and relevant datasets
Major discussion point
Language and Cultural Representation in AI
Topics
Sociocultural | Development
Agreed with
– Abhishek Agarwal
– Amandeep Singh Gil
Agreed on
Language and cultural representation in AI systems is crucial for Global South inclusion
Cultural representation in AI datasets is crucial for ensuring Global South existence in future AI knowledge systems
Explanation
Minister Lawson expressed concern that if AI represents the totality of knowledge in the future, and Global South cultures are not represented in these platforms, it could mean these cultures effectively don’t exist. This makes cultural representation in AI datasets essential for preserving cultural identity and relevance.
Evidence
Used the hypothetical scenario of someone from another planet looking at AI platforms 20 years from now – if Global South cultures aren’t represented, they won’t exist in that knowledge system
Major discussion point
Language and Cultural Representation in AI
Topics
Sociocultural | Human rights
Agreed with
– Abhishek Agarwal
– Amandeep Singh Gil
Agreed on
Language and cultural representation in AI systems is crucial for Global South inclusion
Research programs and joint funding initiatives are needed to make Global South part of AI solutions
Explanation
She emphasized that the Global South doesn’t want to be just consumers of AI technology but needs to be part of creating solutions. This requires more alliances and programs to fund research within the continent and joint research programs with other countries.
Evidence
Mentioned conversations with India and other countries about sending researchers and funding research programs
Major discussion point
Actionable Solutions and Pathways
Topics
Development | Economic
Agreed with
– Andrea Jacobs
– Mariagrazia Squicciarini
Agreed on
Global South countries must transition from AI consumers to producers and co-creators
Shared infrastructure models and new business cases must be developed for GPU and data center access
Explanation
Given the lack of GPUs and data centers in the Global South, new business models need to be developed that allow for shared infrastructure access. This requires creating viable business cases that take into account the reality of resource constraints.
Evidence
Referenced the need to build new programs and models for sharing infrastructure, particularly in the context of limited GPU and data center availability
Major discussion point
Actionable Solutions and Pathways
Topics
Infrastructure | Economic
Local talent training programs should be established within Global South rather than sending talent abroad
Explanation
Minister Lawson advocated for building effective talent training programs within the Global South rather than relying on sending talent abroad for training. This approach would help build local capacity and retain skilled professionals in their home regions.
Evidence
Emphasized the need for conversations about effective talent training and building models within the continent
Major discussion point
Actionable Solutions and Pathways
Topics
Development | Sociocultural
Abhishek Agarwal
Speech speed: 168 words per minute
Speech length: 1823 words
Speech time: 649 seconds
Compute infrastructure scarcity requires innovative solutions like India’s low-cost GPU sharing model
Explanation
Agarwal explained that while India has strong talent and skills, they faced challenges with compute infrastructure and datasets. India addressed this by making 50,000 GPUs available at very low cost (less than a dollar per GPU per hour) to researchers, academics, and startups.
Evidence
Provided specific details about India’s AI mission making 50,000 GPUs available at less than $1 per GPU per hour for Indian researchers, academicians, and startups
Major discussion point
AI Divide and Barriers to Equitable AI Adoption
Topics
Infrastructure | Development
Agreed with
– Cina Lawson
– Tomas Lamanauskas
Agreed on
Three fundamental gaps hinder AI adoption: infrastructure, skills, and data sets
Voice-based AI in local languages is essential for including millions outside the digital ecosystem
Explanation
He emphasized that voice-based large language models (LLMs) in mother tongues are crucial for empowering people who are currently outside the digital ecosystem. When services can be accessed through voice commands in local languages, it can bring millions into the digital fold.
Evidence
Mentioned India’s Bhashini project for natural language processing and voice-based LLMs in all Indian languages
Major discussion point
Language and Cultural Representation in AI
Topics
Sociocultural | Development
Agreed with
– Cina Lawson
– Amandeep Singh Gil
Agreed on
Language and cultural representation in AI systems is crucial for Global South inclusion
Repository of AI applications across sectors should be created for sharing between countries
Explanation
Agarwal proposed creating repositories of AI applications similar to the global DPI repository developed during India’s G20 summit. These applications, such as AI-based cancer diagnosis or agricultural tools, could have use cases across different geographies and countries.
Evidence
Referenced the global DPI repository created as an outcome of the G20 summit India hosted, and gave examples of AI applications for cancer diagnosis, tuberculosis diagnosis, and farmer assistance
Major discussion point
Actionable Solutions and Pathways
Topics
Development | Economic
Global data sharing protocols and anonymization tools need development for cross-border collaboration
Explanation
He emphasized the need for tools that can be shared across countries for anonymization and privacy preservation when building datasets. This includes developing platforms that enable data sharing among stakeholders while maintaining privacy and security.
Evidence
Mentioned India’s DEPA (Data Empowerment and Protection Architecture) framework and referenced it being mentioned at the Paris AI Summit
Major discussion point
Actionable Solutions and Pathways
Topics
Legal and regulatory | Human rights
Tomas Lamanauskas
Speech speed: 185 words per minute
Speech length: 1804 words
Speech time: 582 seconds
Digital infrastructure disparities are stark – Africa has only 1.8% of global data centers despite 18% of population
Explanation
Lamanauskas highlighted the severe infrastructure gap by providing specific statistics showing the disproportionate distribution of data centers globally. This disparity demonstrates the scale of the infrastructure challenge facing the Global South.
Evidence
Provided specific statistics: Africa has around 1.8% of global data centers while having more than 18% of the global population
Major discussion point
AI Divide and Barriers to Equitable AI Adoption
Topics
Infrastructure | Development
Agreed with
– Cina Lawson
– Abhishek Agarwal
Agreed on
Three fundamental gaps hinder AI adoption: infrastructure, skills, and data sets
Policy gaps exist with 55% of countries lacking AI strategies and 85% without regulatory frameworks
Explanation
He identified significant policy and regulatory gaps as barriers to AI development. The lack of proper policies and regulatory environments creates challenges for countries trying to develop their AI capabilities and governance structures.
Evidence
Cited ITU surveys showing 55% of countries don’t have proper AI policies or strategies, and 85% lack appropriate regulatory environments
Major discussion point
AI Divide and Barriers to Equitable AI Adoption
Topics
Legal and regulatory | Development
Trust divide shows 60% of people globally have AI trust issues
Explanation
Lamanauskas highlighted that trust in AI is a global challenge, with approximately 60% of people worldwide having concerns about AI. This trust gap affects AI adoption and acceptance across both developed and developing countries.
Evidence
Provided the statistic that around 60% of people around the world have issues with AI trust
Major discussion point
AI Divide and Barriers to Equitable AI Adoption
Topics
Sociocultural | Human rights
Innovation capabilities are concentrated in few countries as shown by patent distribution
Explanation
He pointed out that when looking at patents as a measure of innovation, only two countries dominate the AI innovation landscape. This concentration of innovative capabilities creates barriers for other countries trying to develop their own AI innovations and companies.
Evidence
Referenced patent distribution data showing two countries dominating AI innovation by significant percentages
Major discussion point
AI Divide and Barriers to Equitable AI Adoption
Topics
Economic | Legal and regulatory
Global South shows optimism divide with 70% viewing AI as helpful versus developed countries’ job displacement fears
Explanation
Lamanauskas identified an interesting paradox where people in developed countries are more fearful of AI taking their jobs (around 70%), while people in the Global South are more optimistic, with 70% believing AI will help them and their economies. This creates a ready population for AI adoption if other barriers are addressed.
Evidence
Provided statistics showing 70% of people in Europe and developed countries fear AI will take jobs, while 70% in Global South believe AI will help develop their economies
Major discussion point
Economic and Innovation Models
Topics
Economic | Sociocultural
AI for Good Global Summit provides platform for inclusive governance discussions with developing countries
Explanation
He highlighted ITU’s AI for Good Global Summit as an established platform that has brought stakeholders together since 2017. The summit includes governance discussions and has seen growing participation from developing countries, with some policymakers joining such discussions for the first time.
Evidence
Mentioned the summit has been running since 2017, had around 70 countries participate in governance discussions last year, with some saying it was their first time in AI governance discussions
Major discussion point
Multilateral Cooperation and Governance Frameworks
Topics
Legal and regulatory | Development
Agreed with
– Yoichi Iida
– Amandeep Singh Gil
– Audrey Plonk
Agreed on
Multi-stakeholder cooperation and inclusive governance frameworks are essential
Amandeep Singh Gil
Speech speed
140 words per minute
Speech length
816 words
Speech time
348 seconds
Language data sets are concentrated in only six or seven languages, missing cultural contexts
Explanation
Gil pointed out that most AI datasets are limited to a small number of languages and reflect very specific cultural contexts, primarily from North America and Western Europe. This creates a significant gap in representation for the majority of the world’s languages and cultures.
Evidence
Noted that most language datasets are in six or seven languages with cultural context specific to North American and Western European contexts
Major discussion point
Language and Cultural Representation in AI
Topics
Sociocultural | Human rights
Agreed with
– Cina Lawson
– Abhishek Agarwal
Agreed on
Language and cultural representation in AI systems is crucial for Global South inclusion
Global Digital Compact established international scientific panel on AI and global dialogue on AI governance
Explanation
Gil explained that the Global Digital Compact led to key decisions including setting up an international independent scientific panel for regular AI assessments and establishing a regular global dialogue on AI governance within the UN. These mechanisms provide sustained, inclusive platforms for AI governance discussions.
Evidence
Referenced the Global Digital Compact adoption and the work of the high-level advisory body on AI that led to these institutional decisions
Major discussion point
Multilateral Cooperation and Governance Frameworks
Topics
Legal and regulatory | Development
Agreed with
– Yoichi Iida
– Tomas Lamanauskas
– Audrey Plonk
Agreed on
Multi-stakeholder cooperation and inclusive governance frameworks are essential
AI capacity building requires innovative financing options to address the AI divide
Explanation
He mentioned that the Global Digital Compact asked for a report on innovative financing options for AI capacity building. This report, based on nearly 200 consultations, will provide governments and other actors with frameworks for investing in compute, data, talent development, and shareable use cases.
Evidence
Referenced a draft report based on nearly 200 consultations across the UN system that will be presented in September, covering aspects like compute, data, talent development, and shareable open use cases
Major discussion point
Multilateral Cooperation and Governance Frameworks
Topics
Development | Economic
Standards development needs regular engagement and clearing house approach for coherent soft regulation
Explanation
Gil emphasized the importance of building standards in various AI areas and having regular engagement to create a more coherent and impactful set of soft regulations. This includes work on AI safety institutes and children’s safety standards.
Evidence
Mentioned AI safety institutes started at Bletchley Park, children’s safety considerations, and the need for industry and tech community benefit
Major discussion point
Multilateral Cooperation and Governance Frameworks
Topics
Legal and regulatory | Human rights
Audrey Plonk
Speech speed
167 words per minute
Speech length
1151 words
Speech time
412 seconds
Institutional and capacity divides limit countries’ ability to participate in global AI discussions
Explanation
Plonk identified institutional divides and capacity constraints as significant barriers preventing countries from participating effectively in AI governance and policy discussions. This includes the ability of governments to develop AI policies and strategies and participate in international dialogues.
Evidence
Referenced OECD’s database covering over 72 jurisdictions and noted there’s room for improvement in helping countries develop policies and learn from each other
Major discussion point
AI Divide and Barriers to Equitable AI Adoption
Topics
Legal and regulatory | Development
Global Partnership on AI expansion aims to include more countries at different AI development levels
Explanation
She explained that GPAI, now part of the OECD, is working to expand and include a larger set of countries that are at various levels of AI development. This expansion aims to bring more diverse perspectives to the table for collaborative work on different AI topics.
Evidence
Mentioned GPAI’s announcement last year in India about becoming part of OECD and the hope for expansion to include countries at different AI development levels
Major discussion point
Multilateral Cooperation and Governance Frameworks
Topics
Legal and regulatory | Development
Agreed with
– Yoichi Iida
– Amandeep Singh Gil
– Tomas Lamanauskas
Agreed on
Multi-stakeholder cooperation and inclusive governance frameworks are essential
Financial divides limit SME engagement in AI development and deployment
Explanation
Plonk highlighted that financial barriers and investment gaps in AI create challenges for small and medium enterprises, which are crucial for the global economy, to engage meaningfully in the AI ecosystem. This affects the diversity of actors in AI development.
Evidence
Referenced the reality of financial divides in terms of investment in AI and the importance of SMEs as the lifeblood of the global economy
Major discussion point
Economic and Innovation Models
Topics
Economic | Development
Andrea Jacobs
Speech speed
140 words per minute
Speech length
697 words
Speech time
297 seconds
Small developing states face weak data ecosystems and limited regulatory capacity
Explanation
Jacobs explained that Caribbean and small developing states face structural barriers including unreliable connectivity, particularly in rural and outer island areas, weak data ecosystems that limit context-relevant AI development, and limited regulatory and technical capacity for safe AI use.
Evidence
Specifically mentioned unreliable connectivity in rural areas and outer island areas, and weak data ecosystems limiting ability to develop context-relevant AI
Major discussion point
AI Divide and Barriers to Equitable AI Adoption
Topics
Infrastructure | Legal and regulatory
Caribbean and Global South are primarily consumers of AI technologies built elsewhere without their contexts in mind
Explanation
She emphasized that there’s a major imbalance where Global South countries are overwhelmingly consumers of AI technologies developed elsewhere, often without consideration for their realities, languages, or priorities. This creates risks from biased tools not built for their populations.
Evidence
Noted that AI companies often don’t consider people in the Global South or black and brown people when developing products, leading to embedded biases
Major discussion point
AI Divide and Barriers to Equitable AI Adoption
Topics
Economic | Human rights
Agreed with
– Cina Lawson
– Mariagrazia Squicciarini
Agreed on
Global South countries must transition from AI consumers to producers and co-creators
Global South must use collective voice to advocate for inclusive AI governance frameworks
Explanation
Jacobs argued that until Global South countries progress from being primarily consumers to producers of AI, they must leverage their collective strength and voices to advocate for inclusive, transparent, and accountable AI governance frameworks as equal partners in shaping the digital future.
Evidence
Emphasized the need for the Global South to have ‘a seat at the table’ in AI meetings and to be part of co-creating values, rules, and technologies
Major discussion point
Actionable Solutions and Pathways
Topics
Legal and regulatory | Human rights
Agreed with
– Cina Lawson
– Mariagrazia Squicciarini
Agreed on
Global South countries must transition from AI consumers to producers and co-creators
Local data ecosystems and data rights need development alongside skills training
Explanation
She advocated for developing and investing in local data ecosystems and data rights as essential components of building AI capacity. This should be coupled with ensuring people have the knowledge and skills to retool and upskill for the AI era.
Major discussion point
Actionable Solutions and Pathways
Topics
Legal and regulatory | Development
Yoichi Iida
Speech speed
113 words per minute
Speech length
748 words
Speech time
395 seconds
Aging populations face unique challenges requiring trust and literacy in AI technology
Explanation
Iida explained that Japan faces unique challenges with a rapidly aging society and decreasing population, making it essential to use AI technology to maintain societal energy and liveliness. For this to work, trust in technology and skills/literacy among the population are crucial elements.
Evidence
Referenced Japan’s rapidly aging society and decreasing population as specific demographic challenges requiring AI solutions
Major discussion point
Focus on Vulnerable Populations and Inclusion
Topics
Sociocultural | Development
Hiroshima Process promotes AI company risk assessment and information sharing to foster trust
Explanation
He described Japan’s Hiroshima Process initiative that encourages AI companies to assess risks and challenges in their AI models, take appropriate measures, and share relevant information with the public openly. This approach aims to foster trust in AI technology among people.
Evidence
Mentioned Japan’s recent AI law, enacted at the end of the previous month, which is framed as an AI promotion law rather than a regulation law
Major discussion point
Multilateral Cooperation and Governance Frameworks
Topics
Legal and regulatory | Sociocultural
Disagreed with
– Henri Verdier
Disagreed on
Approach to AI regulation – promotion vs. risk management
Co-governance involving governments, businesses, civil society and academia is needed for trustworthy AI ecosystem
Explanation
Iida emphasized the need for collaborative governance that brings together all stakeholders – governments, businesses, civil society, and academia – to achieve a safe, secure, and trustworthy AI ecosystem across the world. This multi-stakeholder approach is essential for effective AI governance.
Evidence
Referenced the need for coherent and interoperable governance frameworks across regions and countries to enable safe use of AI technologies from abroad
Major discussion point
Multilateral Cooperation and Governance Frameworks
Topics
Legal and regulatory | Sociocultural
Agreed with
– Amandeep Singh Gil
– Tomas Lamanauskas
– Audrey Plonk
Agreed on
Multi-stakeholder cooperation and inclusive governance frameworks are essential
Multi-stakeholder approach through forums like GPAI can create AI services responding to concrete user demands
Explanation
He argued that turning AI applications into reality, based on people’s concrete demands and needs, requires working in a multi-stakeholder way. Forums like the Global Partnership on AI provide opportunities to realize this approach and create AI services that respond to individual user needs.
Evidence
Mentioned GPAI, India Summit, and Hiroshima Process Friends Group as examples of forums where multi-stakeholder approaches can be realized
Major discussion point
Actionable Solutions and Pathways
Topics
Legal and regulatory | Development
Mariagrazia Squicciarini
Speech speed
174 words per minute
Speech length
1722 words
Speech time
593 seconds
Within-country inequalities including generational divides need attention alongside global disparities
Explanation
Squicciarini emphasized that AI inequalities exist not just between developed and developing countries, but also within countries. She highlighted generational divides as particularly important, noting that it’s different to deal with AI when you have a young population versus an aging one.
Evidence
Referenced current G20 discussions about within-country inequalities and used the example of generational differences in AI adoption and skill development
Major discussion point
Focus on Vulnerable Populations and Inclusion
Topics
Sociocultural | Human rights
Inclusive AI benefits everyone by improving system performance through better, more representative data
Explanation
She argued against the false notion that inclusion only benefits the included, explaining that biased data and a lack of diverse representation actually make AI systems perform worse. Including more actors, languages, and communities creates better-performing AI systems that benefit everyone.
Evidence
Explained that biased data and missing languages/communities lead to poor performance in ‘wild testing’ scenarios, making AI less fit for purpose
Major discussion point
Focus on Vulnerable Populations and Inclusion
Topics
Human rights | Economic
Current AI innovation concentration in few companies limits breakthrough innovation potential from startups
Explanation
Squicciarini pointed out that while AI is currently dominated by large corporations from certain countries, there are many startups trying to scale up but finding it difficult. This concentration limits innovation potential since breakthrough innovations typically come from young and small entities.
Evidence
Referenced research showing that radical innovation and breakthrough innovations tend to come from young and small entities
Major discussion point
Economic and Innovation Models
Topics
Economic | Development
Agreed with
– Cina Lawson
– Andrea Jacobs
Agreed on
Global South countries must transition from AI consumers to producers and co-creators
Disagreed with
– Sharad Sharma
Disagreed on
Innovation model emphasis – public vs. private sector balance
UNESCO’s AI ethics framework and readiness assessment methodology help countries evaluate their AI preparedness
Explanation
She described UNESCO’s readiness assessment methodology as a tool that gives countries a comprehensive picture of their AI preparedness without ranking them. This approach recognizes that no country has perfect solutions yet, but there are good practices that can be shared globally.
Evidence
Mentioned working with India to finalize their readiness assessment and emphasized that it’s not a ranking system but a comprehensive analysis tool
Major discussion point
Multilateral Cooperation and Governance Frameworks
Topics
Legal and regulatory | Development
Ethical-by-design approach should replace problem-fixing approach for better AI implementation
Explanation
Squicciarini advocated for moving from fixing problems after they occur to building in ethics by design from the start. This means ensuring AI systems respect human rights, human dignity, and fundamental freedoms from the design stage, which makes implementation much easier.
Major discussion point
Actionable Solutions and Pathways
Topics
Human rights | Legal and regulatory
Sharad Sharma
Speech speed
149 words per minute
Speech length
1252 words
Speech time
503 seconds
Public infrastructure for controlled data sharing is essential for global AI development
Explanation
Sharma emphasized the need for a new type of public infrastructure that enables controlled data sharing between companies and countries while allowing them to maintain control. This was identified as recommendation number six from the UN AI advisory body report on global training data sharing framework.
Evidence
Referenced UN AI advisory body recommendation number six on global training data sharing framework and mentioned India’s advanced work on DEPA (Data Empowerment and Protection Architecture)
Major discussion point
Actionable Solutions and Pathways
Topics
Legal and regulatory | Infrastructure
Techno-legal regulation must replace traditional regulation to prevent gaming by AI service providers
Explanation
He argued that traditional regulation can be easily gamed by producers of digital and AI services, as has happened over the past 10-15 years. A new regulatory paradigm called techno-legal regulation is essential to address this challenge effectively.
Evidence
Referenced India’s Prime Minister discussing techno-legal regulation at the AI Summit and mentioned learnings from India’s experience that are available to other countries
Major discussion point
Actionable Solutions and Pathways
Topics
Legal and regulatory | Infrastructure
Innovation architecture should combine public goods with private innovation rather than purely private sector approach
Explanation
Sharma emphasized that innovation must be built on both public goods and private innovation. Without public goods, countries become merely consumers rather than producers, missing out on value capture even when they benefit from AI applications like better education or healthcare.
Evidence
Used India as an example, noting that while India might have better AI-powered teachers and doctors, the value capture would not be in India if the innovation is purely private
Major discussion point
Actionable Solutions and Pathways
Topics
Economic | Development
Disagreed with
– Mariagrazia Squicciarini
Disagreed on
Innovation model emphasis – public vs. private sector balance
Child safety should be global priority given AI’s potential destructive effects on young adults
Explanation
Sharma argued that while AI can enhance young adults’ capabilities as students, it also has enormous potential for destructive effects including loss of cultural moorings, distraction by pornography, and gambling/gaming addiction. Child safety should be a global focus, not just for India or the Global South.
Evidence
Mentioned specific risks including cultural displacement, pornography, gambling, and gaming as concerns for children using AI systems
Major discussion point
Focus on Vulnerable Populations and Inclusion
Topics
Human rights | Sociocultural
Cultural preservation for children using AI systems is important for national sovereignty
Explanation
He emphasized the importance of each country perpetuating its culture among young children who will be living digitally and using AI systems. This cultural preservation aspect is crucial from a sovereignty perspective as children increasingly interact with AI systems.
Evidence
Referenced the focus on young adults in India’s AI strategy and the concern about cultural moorings in digital environments
Major discussion point
Focus on Vulnerable Populations and Inclusion
Topics
Sociocultural | Human rights
Henri Verdier
Speech speed
146 words per minute
Speech length
1538 words
Speech time
630 seconds
Innovation is not always progress and progress is not always for everyone, requiring focus on ensuring AI benefits the global majority
Explanation
Verdier emphasized that technological innovation doesn’t automatically translate to progress for all people. With the AI revolution, there’s a critical need to ensure benefits reach emerging economies and the vast majority of humankind, not just a privileged few.
Evidence
Referenced the context of AI summits from Bletchley Park (existential risk) to Paris (innovation, governance, environmental impacts) to Delhi (development, inclusion, benefit for everyone)
Major discussion point
AI Divide and Barriers to Equitable AI Adoption
Topics
Development | Human rights
History shows brilliant technologies can fail to become tools for progress, requiring proactive measures to ensure AI serves humanity
Explanation
Verdier warned that history provides examples of great technologies that weren’t used optimally – television could have been brilliant for education but didn’t become a real educational resource, and GMOs could have solved agricultural problems in tropical areas but didn’t. This historical perspective suggests we need to think more deeply about ensuring AI becomes a force for good.
Evidence
Provided specific historical examples of television’s unrealized educational potential and genetically modified organisms’ missed opportunities in tropical agriculture
Major discussion point
AI Divide and Barriers to Equitable AI Adoption
Topics
Development | Sociocultural
Public research and common knowledge for humankind are of utmost importance for inclusive AI development
Explanation
Verdier emphasized the critical need to empower public research and create common knowledge that belongs to all humanity. This approach is essential for ensuring AI development serves broader societal interests rather than just private commercial interests.
Major discussion point
Actionable Solutions and Pathways
Topics
Development | Economic
The Global South’s optimism about AI stems from uncertainty about benefits, unlike the North’s focus on risk management
Explanation
Verdier identified a key difference in AI perspectives: the Global North is relatively confident about receiving AI benefits and focuses on managing risks, while the Global South is not yet sure they will benefit sufficiently from AI. This creates different priorities and approaches to AI governance and development.
Evidence
Referenced Minister Lawson and Tomas Lamanauskas’s discussion about the optimism divide
Major discussion point
AI Divide and Barriers to Equitable AI Adoption
Topics
Economic | Sociocultural
Disagreed with
– Yoichi Iida
Disagreed on
Approach to AI regulation – promotion vs. risk management
Audience
Speech speed
154 words per minute
Speech length
672 words
Speech time
260 seconds
AI companies should compensate citizens for using their data, though implementation challenges exist
Explanation
An audience member suggested that companies earning money from user data should provide financial compensation to citizens and SMEs. While supporting the principle, the speaker acknowledged uncertainty about how it could be implemented, since it could fundamentally change economic systems.
Evidence
Referenced Joseph Gordon-Levitt’s idea shared earlier about companies giving back benefits from data use
Major discussion point
Economic and Innovation Models
Topics
Economic | Human rights
B Corp certification demonstrates how businesses can be forces for good in AI governance
Explanation
An audience member shared their experience as a data protection officer who achieved B Corp certification, which requires demonstrating high levels of environmental, social, and governance standards. They advocated for requiring organizations to demonstrate positive social impact and suggested investing in women and women-owned businesses as part of improving AI governance.
Evidence
Provided personal example of running a global data protection consultancy that achieved B Corp standard and reports yearly on social impact
Major discussion point
Actionable Solutions and Pathways
Topics
Economic | Human rights
Neurodiversity and disability inclusion must be part of AI development conversations beyond geographic and cultural divides
Explanation
An audience member emphasized that human diversity extends beyond geographic, language, and cultural differences to include aging populations, gender, disabilities, and neurodiversity. They highlighted that 53% of Gen Z identify as neurodiverse and warned that excluding these groups from AI development would create significant divides beyond the Global South focus.
Evidence
Cited statistic that 53% of Gen Z identify as neurodiverse and referenced concerns about AI’s impact on mental health and the ‘anxious generation’
Major discussion point
Focus on Vulnerable Populations and Inclusion
Topics
Human rights | Sociocultural
Agreements
Agreement points
Three fundamental gaps hinder AI adoption: infrastructure, skills, and data sets
Speakers
– Cina Lawson
– Abhishek Agarwal
– Tomas Lamanauskas
Arguments
Infrastructure gaps including lack of connectivity, reliable electricity, GPUs, and data centers hinder AI development in Global South
Skills shortage and declining interest in math and science education creates major challenges for AI adoption
Data sets availability is crucial – countries need relevant local data to build effective AI applications
Compute infrastructure scarcity requires innovative solutions like India’s low-cost GPU sharing model
Digital infrastructure disparities are stark – Africa has only 1.8% of global data centers despite 18% of population
Summary
Multiple speakers identified the same three core barriers to equitable AI adoption: inadequate infrastructure (connectivity, electricity, GPUs, data centers), skills shortages (particularly in STEM education), and lack of relevant datasets. This represents a clear consensus on the fundamental challenges.
Topics
Infrastructure | Development | Sociocultural
Language and cultural representation in AI systems is crucial for Global South inclusion
Speakers
– Cina Lawson
– Abhishek Agarwal
– Amandeep Singh Gil
Arguments
AI platforms designed in foreign languages exclude Global South populations from participation
Cultural representation in AI datasets is crucial for ensuring Global South existence in future AI knowledge systems
Voice-based AI in local languages is essential for including millions outside the digital ecosystem
Language data sets are concentrated in only six or seven languages, missing cultural contexts
Summary
Speakers agreed that AI systems must incorporate local languages and cultural contexts to be truly inclusive. They emphasized that current AI systems are predominantly designed in a few languages with Western cultural contexts, excluding the Global South.
Topics
Sociocultural | Human rights | Development
Global South countries must transition from AI consumers to producers and co-creators
Speakers
– Cina Lawson
– Andrea Jacobs
– Mariagrazia Squicciarini
Arguments
Research programs and joint funding initiatives are needed to make Global South part of AI solutions
Caribbean and Global South are primarily consumers of AI technologies built elsewhere without their contexts in mind
Global South must use collective voice to advocate for inclusive AI governance frameworks
Current AI innovation concentration in few companies limits breakthrough innovation potential from startups
Summary
There was strong agreement that Global South countries cannot remain merely consumers of AI technology but must become active participants in AI development, governance, and innovation to ensure their needs and perspectives are represented.
Topics
Economic | Development | Human rights
Multi-stakeholder cooperation and inclusive governance frameworks are essential
Speakers
– Yoichi Iida
– Amandeep Singh Gil
– Tomas Lamanauskas
– Audrey Plonk
Arguments
Co-governance involving governments, businesses, civil society and academia is needed for trustworthy AI ecosystem
Global Digital Compact established international scientific panel on AI and global dialogue on AI governance
AI for Good Global Summit provides platform for inclusive governance discussions with developing countries
Global Partnership on AI expansion aims to include more countries at different AI development levels
Summary
Speakers consistently emphasized the need for inclusive, multi-stakeholder approaches to AI governance that bring together governments, businesses, civil society, and academia, with particular attention to including developing countries in these discussions.
Topics
Legal and regulatory | Development
Similar viewpoints
Both speakers argued against purely private sector-driven AI development, emphasizing that public goods and inclusive approaches actually benefit everyone, including those already advantaged, by creating better-performing systems.
Speakers
– Sharad Sharma
– Mariagrazia Squicciarini
Arguments
Innovation architecture should combine public goods with private innovation rather than purely private sector approach
Inclusive AI benefits everyone by improving system performance through better, more representative data
Topics
Economic | Development
Both speakers emphasized that AI inequalities are multifaceted, affecting not just Global South countries but also specific populations within countries, including marginalized communities and different demographic groups.
Speakers
– Andrea Jacobs
– Mariagrazia Squicciarini
Arguments
Caribbean and Global South are primarily consumers of AI technologies built elsewhere without their contexts in mind
Within-country inequalities including generational divides need attention alongside global disparities
Topics
Human rights | Sociocultural
Both speakers identified a paradoxical ‘optimism divide’ where Global South populations are more optimistic about AI’s potential benefits while developed countries focus more on managing AI risks and job displacement concerns.
Speakers
– Tomas Lamanauskas
– Henri Verdier
Arguments
Global South shows optimism divide with 70% viewing AI as helpful versus developed countries’ job displacement fears
The Global South’s optimism about AI stems from uncertainty about benefits, unlike the North’s focus on risk management
Topics
Economic | Sociocultural
Unexpected consensus
Trust and social acceptance as critical barriers to AI adoption
Speakers
– Yoichi Iida
– Tomas Lamanauskas
– Mariagrazia Squicciarini
Arguments
Aging populations face unique challenges requiring trust and literacy in AI technology
Trust divide shows 60% of people globally have AI trust issues
Inclusive AI benefits everyone by improving system performance through better, more representative data
Explanation
While much discussion focused on technical and infrastructure barriers, there was unexpected consensus that trust and social acceptance are equally critical challenges. This was surprising given the technical focus of many speakers’ backgrounds.
Topics
Sociocultural | Human rights
Child safety and protection should be a global AI priority
Speakers
– Sharad Sharma
– Audience
Arguments
Child safety should be global priority given AI’s potential destructive effects on young adults
Neurodiversity and disability inclusion must be part of AI development conversations beyond geographic and cultural divides
Explanation
The emergence of child safety and protection of vulnerable populations as a priority was unexpected in a discussion primarily focused on Global South development challenges, showing broader consensus on protecting vulnerable groups.
Topics
Human rights | Sociocultural
Need for new regulatory paradigms beyond traditional approaches
Speakers
– Sharad Sharma
– Mariagrazia Squicciarini
– Yoichi Iida
Arguments
Techno-legal regulation must replace traditional regulation to prevent gaming by AI service providers
Ethical-by-design approach should replace problem-fixing approach for better AI implementation
Hiroshima Process promotes AI company risk assessment and information sharing to foster trust
Explanation
There was unexpected consensus that traditional regulatory approaches are insufficient for AI governance, with speakers from different regions agreeing on the need for innovative regulatory paradigms that combine technical and legal approaches.
Topics
Legal and regulatory | Infrastructure
Overall assessment
Summary
The discussion revealed remarkably strong consensus on fundamental challenges (infrastructure, skills, data), the need for inclusive governance, and the importance of moving Global South countries from consumers to producers of AI technology. There was also unexpected agreement on trust issues, child safety, and the need for new regulatory approaches.
Consensus level
High level of consensus with significant implications for AI governance. The agreement suggests a clear pathway forward focusing on: 1) Addressing the three fundamental gaps through innovative financing and sharing mechanisms, 2) Ensuring language and cultural representation in AI systems, 3) Creating inclusive multi-stakeholder governance frameworks, and 4) Developing new regulatory paradigms that combine technical and legal approaches. This consensus provides a strong foundation for coordinated international action on AI inclusion.
Differences
Different viewpoints
Approach to AI regulation – promotion vs. risk management
Speakers
– Yoichi Iida
– Henri Verdier
Arguments
Hiroshima Process promotes AI company risk assessment and information sharing to foster trust
The Global South’s optimism about AI stems from uncertainty about benefits, unlike the North’s focus on risk management
Summary
Iida advocates for Japan’s promotion-focused AI law and trust-building approach, while Verdier highlights the fundamental difference in perspectives between Global North (risk-focused) and Global South (benefit-focused) approaches to AI governance
Topics
Legal and regulatory | Sociocultural
Innovation model emphasis – public vs. private sector balance
Speakers
– Sharad Sharma
– Mariagrazia Squicciarini
Arguments
Innovation architecture should combine public goods with private innovation rather than purely private sector approach
Current AI innovation concentration in few companies limits breakthrough innovation potential from startups
Summary
Sharma emphasizes the need for public goods infrastructure to prevent countries from becoming mere consumers, while Squicciarini focuses on supporting small entities and startups within the existing private sector framework
Topics
Economic | Development
Unexpected differences
Scope of inclusion priorities
Speakers
– Multiple speakers
– Audience
Arguments
Focus on Global South inclusion and geographic divides
Neurodiversity and disability inclusion must be part of AI development conversations beyond geographic and cultural divides
Explanation
While the panel focused heavily on Global South inclusion, an audience member challenged this narrow focus by highlighting that human diversity includes neurodiversity, disabilities, and generational differences that cut across geographic boundaries. This created tension between geographic-focused inclusion and broader human diversity considerations
Topics
Human rights | Sociocultural
Historical perspective on technology adoption
Speakers
– Henri Verdier
– Tomas Lamanauskas
Arguments
History shows brilliant technologies can fail to become tools for progress, requiring proactive measures to ensure AI serves humanity
Global South shows optimism divide with 70% viewing AI as helpful versus developed countries’ job displacement fears
Explanation
Verdier’s pessimistic historical view of technology adoption (citing TV and GMOs as missed opportunities) contrasts with Lamanauskas’s optimistic observation about Global South readiness to adopt AI. This unexpected disagreement reveals different philosophical approaches to technology’s potential
Topics
Development | Sociocultural
Overall assessment
Summary
The discussion showed remarkable consensus on identifying problems (infrastructure gaps, skills shortages, data availability) but revealed subtle disagreements on solutions and approaches. Key tensions emerged around regulatory philosophy (promotion vs. risk management), innovation models (public vs. private sector emphasis), and the scope of inclusion priorities.
Disagreement level
Low to moderate disagreement level with high implications. While speakers largely agreed on problems and goals, their different solution approaches reflect deeper philosophical and strategic differences that could significantly impact policy directions. The consensus on problems but divergence on solutions suggests the need for more nuanced, multi-faceted approaches that can accommodate different regional priorities and governance philosophies.
Takeaways
Key takeaways
There is a significant AI divide between developed and developing countries, with the Global South facing barriers in infrastructure, skills, and data access that risk excluding them from AI benefits
Three critical gaps hinder equitable AI adoption: infrastructure (connectivity, electricity, GPUs, data centers), skills (declining STEM education, lack of AI literacy), and data sets (absence of locally relevant, culturally appropriate data)
Language and cultural representation in AI systems is essential – current AI platforms are predominantly designed in a few languages and reflect limited cultural contexts, potentially erasing Global South presence from future AI knowledge systems
An ‘optimism divide’ exists where Global South populations (70%) view AI as helpful for development, while developed countries fear job displacement, suggesting readiness for AI adoption if barriers are addressed
Multilateral cooperation through existing frameworks (UN Global Digital Compact, GPAI, UNESCO ethics guidelines, ITU AI for Good) provides foundation for inclusive AI governance, but needs strengthening
Innovation architecture must shift from purely private sector-driven to combining public goods with private innovation to ensure equitable value capture and prevent Global South from remaining mere consumers
Child safety and protection of young adults should be a global priority given AI’s potential for both empowerment and harm to cultural identity and development
Traditional regulation is inadequate for AI governance – techno-legal regulation approaches are needed to prevent gaming by AI service providers
Resolutions and action items
Create a global repository of AI applications across sectors (healthcare, agriculture, education) that can be shared and adapted by different countries, similar to the Digital Public Infrastructure repository
Develop global data sharing protocols and anonymization tools to enable cross-border collaboration while preserving privacy and control
Establish public infrastructure for controlled data sharing (DEPA framework) that enables data empowerment and protection simultaneously
Build shared infrastructure models and new business cases for GPU and data center access in the Global South
Develop local talent training programs within Global South countries rather than relying on sending talent abroad for training
Expand participation in existing multilateral frameworks (GPAI, Hiroshima Process, UNESCO initiatives) to include more developing countries
Focus AI development on voice-based services in local languages to include populations outside the digital ecosystem
Implement ethical-by-design approaches rather than problem-fixing approaches for AI development and deployment
Establish regular scientific assessments through the international scientific panel on AI as mandated by the Global Digital Compact
Continue dialogue through participatory processes including public consultations, working groups, and open calls for the February 2026 AI Impact Summit in India
Unresolved issues
How to finance the massive infrastructure investments needed to bridge the AI divide in developing countries
Specific mechanisms for ensuring Global South countries transition from AI consumers to producers and co-creators
How to balance AI safety and risk management with the urgent need for AI access and development in the Global South
Concrete implementation details for global data sharing frameworks while respecting national sovereignty and privacy concerns
How to address within-country inequalities (generational, gender, disability, neurodiversity) alongside global disparities
Specific business models and financing mechanisms for shared AI infrastructure that are sustainable and scalable
How to ensure cultural preservation and representation in AI systems as they become more pervasive
Measurement and evaluation frameworks to track progress on inclusive AI adoption and impact
How to prevent AI from exacerbating existing inequalities while harnessing its potential for development
Coordination mechanisms between multiple multilateral initiatives to avoid duplication and ensure coherent global approach
Suggested compromises
Recognize that different regions have different AI priorities – developed countries focus on risk management while Global South focuses on access and development benefits
Combine global standards development with local adaptation to respect cultural contexts while maintaining interoperability
Balance public goods approach with private sector innovation through hybrid models that ensure equitable value distribution
Use existing multilateral frameworks as building blocks rather than creating entirely new governance structures
Focus on practical, implementable solutions (voice-based AI, shared repositories) while working toward longer-term systemic changes
Acknowledge that Global South countries may need to remain consumers initially while building pathways to become producers over time
Integrate AI governance with broader digital transformation and development agendas rather than treating as separate issue
Combine top-down policy frameworks with bottom-up innovation and local problem-solving approaches
Thought provoking comments
If we are not part of the conversation, we won’t exist in the future. One fear that we have is that imagine the world 20 years from now. And if AI represent the totality of knowledge, if you’re not part of this knowledge, people, if someone coming from I don’t know which planet 20 years from now, looking at the data on the platform, if we don’t exist on this platform, it will mean that we don’t exist at all.
Speaker
Cina Lawson (Minister for Digital Economy and Transformation of Togo)
Reason
This comment reframes the AI divide from a technical challenge to an existential threat. It introduces the profound concept that exclusion from AI systems could lead to cultural and societal erasure, elevating the stakes beyond economic disadvantage to questions of survival and representation in human knowledge.
Impact
This comment fundamentally shifted the discussion’s urgency and philosophical depth. It moved the conversation beyond technical barriers to existential concerns, influencing subsequent speakers to address cultural representation, language diversity, and the need for inclusive data sets as matters of survival rather than mere preference.
More of the same is a recipe for disaster. We must acknowledge as a group of people that we have to make a new beginning. If we don’t make a new beginning, just keep doing what we’ve been doing for the last 10, 15 years, we will not get good outcomes… The last 10, 15 years are years of failure, and [we must] not perpetuate the things that we have been doing.
Speaker
Sharad Sharma
Reason
This is a bold challenge to the entire premise of incremental reform in digital governance. Sharma directly confronts the assumption that existing multilateral approaches can be adapted for AI, arguing instead for fundamental paradigm shifts including techno-legal regulation and public-private innovation models.
Impact
This comment introduced a critical tension into the discussion, challenging the optimistic tone about multilateral cooperation. It forced other speakers to defend or acknowledge limitations in current approaches, and influenced the conversation toward more radical solutions like India’s DPI model and new regulatory frameworks.
Including, bringing more actors around the table… does not only benefit those that are included, but actually those that include [them]. So ultimately, inclusive AI is actually very good business because it’s more accessible and brings better benefits.
Speaker
Mariagrazia Squicciarini (UNESCO)
Reason
This comment flips the traditional charity-based framing of inclusion, presenting it instead as a technical and business imperative. It argues that AI systems perform better when they include diverse perspectives and data, making inclusion a quality issue rather than just an equity issue.
Impact
This reframing helped shift the discussion from moral arguments for inclusion to practical ones, making the case more compelling for stakeholders focused on AI performance and business outcomes. It influenced subsequent discussions about data quality and system effectiveness.
I find it very intriguing, what they would call maybe optimism divide. An optimism divide is inversely related to everything what I said now… 70% of the people [in Global South] say, actually, AI may help us… Whereas when you look at the developed countries… 70% are actually fearful that AI may take their jobs.
Speaker
Tomas Lamanauskas (ITU)
Reason
This observation reveals a counterintuitive paradox: those with less access to AI are more optimistic about it, while those with greater access are more fearful. This challenges assumptions about who wants AI development and suggests different regional priorities and perspectives.
Impact
This insight added nuance to the discussion by highlighting that the Global South isn’t just seeking inclusion out of necessity, but out of genuine optimism about AI’s potential. It influenced the moderator’s closing remarks and helped explain why different regions approach AI governance differently.
Innovation has to become an innovation that is built on public goods and private innovation. Because if you don’t have public goods and the innovation is entirely in the realm of private sector, then… the value capture of all this will not be in India.
Speaker
Sharad Sharma
Reason
This comment identifies a fundamental structural issue: that purely private innovation leads to value extraction rather than local value creation. It proposes a hybrid model that combines public infrastructure with private innovation, challenging the dominant Silicon Valley model.
Impact
This concept influenced discussions about data sharing frameworks, public infrastructure for AI, and the need for countries to become producers rather than just consumers of AI technology. It provided a theoretical foundation for several concrete proposals that followed.
We are overwhelmingly consumers of AI technologies that are developed elsewhere. And oftentimes [without] our realities, languages or priorities in mind… Most of these companies don’t bear this in mind… the tools that we adopt are not built for us.
Speaker
Andrea Jacobs (Antigua and Barbuda)
Reason
This comment crystallizes the core problem of technological colonialism in AI, where Global South countries are relegated to passive consumption of technologies designed without their input, leading to systems that may not serve their needs or may even cause harm.
Impact
This stark framing reinforced Minister Lawson’s existential concerns and influenced the discussion toward concrete solutions for moving from consumption to production, including local data ecosystems and stronger participation in global governance frameworks.
Overall assessment
These key comments fundamentally elevated and transformed the discussion from a technical problem-solving session into a deeper examination of power, representation, and systemic change in the global AI ecosystem. Minister Lawson’s existential framing set a tone of urgency that permeated the entire discussion, while Sharma’s call for paradigm change challenged participants to think beyond incremental reforms. Squicciarini’s business case for inclusion and Lamanauskas’s optimism divide observation added crucial nuance that prevented the discussion from becoming purely adversarial. Together, these comments created a rich, multi-layered conversation that moved beyond the typical ‘digital divide’ framing to address fundamental questions about technological sovereignty, cultural survival, and the need for new models of global cooperation in the AI era.
Follow-up questions
How can we develop effective business models to support AI infrastructure funding in the Global South?
Speaker
Cina Lawson
Explanation
Minister Lawson identified the need to think about funding infrastructure and what types of business models are needed to support filling the infrastructure gap, but didn’t provide specific solutions
How do we address the declining interest in math and science education among African children?
Speaker
Cina Lawson
Explanation
This was identified as a major challenge affecting skills development for AI, but no concrete solutions were discussed
How can we build hundreds or tens of data set projects needed for AI relevance in developing countries?
Speaker
Cina Lawson
Explanation
The scale of data set development needed was identified but the practical implementation pathway was not detailed
How do we measure and track compute capability availability across different countries?
Speaker
Audrey Plonk
Explanation
OECD is developing a new methodology but it’s not yet fully public, indicating need for further development and sharing
What are the effective models for talent training within the Global South without needing to send talent abroad?
Speaker
Cina Lawson
Explanation
This was identified as crucial but specific training models and programs were not elaborated upon
How can we create shared infrastructure models for GPUs and data centers for developing countries?
Speaker
Cina Lawson
Explanation
The need for shared infrastructure was identified but the business models and implementation mechanisms require further research
How do we implement techno-legal regulation effectively across different jurisdictions?
Speaker
Sharad Sharma
Explanation
This was presented as essential for AI governance but the practical implementation details across different legal systems need further exploration
How can we operationalize the global training data sharing framework (DEPA) internationally?
Speaker
Sharad Sharma
Explanation
While India is developing this framework, how it can be applied globally for data sharing protocols requires further research
What are the specific mechanisms for including neurodiverse individuals and people with disabilities in AI development?
Speaker
Nupur Chunchunwala
Explanation
This diversity aspect was raised but specific inclusion mechanisms in international AI cooperation were not detailed
How do we move from high-level commitments to real actionable pathways for AI inclusion?
Speaker
Abhishek Agarwal
Explanation
This was identified as a key challenge but the specific mechanisms for translating commitments into action require further development
How can we ensure AI systems protect and empower children while preserving cultural moorings?
Speaker
Sharad Sharma
Explanation
Child safety in AI was identified as a global priority but specific protective mechanisms need further research and development
What are the innovative financing options for AI capacity building in developing countries?
Speaker
Amandeep Singh Gil
Explanation
A UN report on this topic was mentioned as being finalized but the specific financing mechanisms and their implementation need further exploration
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.