Shaping the Future: AI Strategies for Jobs and Economic Development


Session at a glance: summary, key points, and speakers overview

Summary

This discussion focused on AI-driven strategies for workforce and economic growth, examining how artificial intelligence will impact society, jobs, and industries globally. The panel brought together experts from various sectors including government officials from ASEAN, healthcare professionals, technology infrastructure leaders, and AI company executives to address critical challenges facing both developed and emerging economies.


A central theme emerged around collaboration rather than displacement of human workers. Panelists emphasized that AI should enhance and augment human capabilities rather than completely replace jobs, particularly noting that white-collar positions may face more immediate impact than blue-collar roles. The discussion highlighted significant infrastructure challenges, including the massive energy requirements for AI deployment, with projections suggesting the world will need four times more energy in the next 10-12 years to support data center growth.


Several speakers addressed the unique needs of emerging economies, particularly the 70 million MSMEs in India that employ 230 million people and contribute significantly to GDP and exports. The conversation explored how to make AI accessible and affordable for smaller companies that cannot afford enterprise-level solutions. Infrastructure bottlenecks were identified as major obstacles, including power supply, cooling systems, skilled labor shortages, and the need for both large-scale cloud data centers and edge computing solutions.


Healthcare applications received particular attention, with examples from countries like Guyana demonstrating how telemedicine and AI diagnostics can serve remote populations while maintaining the essential human element in patient care. The panelists concluded that continuous learning and upskilling will be essential for all workers regardless of age, as the rapid pace of technological change demands constant adaptation. The discussion emphasized that successful AI implementation requires collaboration between government, private sector, and educational institutions to ensure inclusive and sustainable economic growth.


Key points

Overall Purpose/Goal

This discussion was a comprehensive panel at the India AI Impact Summit focused on “AI-driven strategies for workforce and economic growth.” The goal was to explore how AI can be implemented responsibly across different sectors and regions, particularly addressing the needs of emerging economies, MSMEs (Micro, Small & Medium Enterprises), and the Global South, while ensuring inclusive and sustainable development.


Major Discussion Points

Workforce Transformation and Job Impact: A central theme throughout both panels was whether AI will replace or enhance human jobs. Panelists consistently emphasized “collaboration not displacement” and “enhancement not replacement,” particularly noting that AI’s current impact is more on white-collar jobs through collaborative augmentation rather than full automation. The consensus was that continuous learning and upskilling will be essential for workforce adaptation.


Infrastructure and Energy Challenges: Significant discussion around the massive infrastructure requirements for AI deployment, including the need for 4x more energy in the next 10-12 years and four trillion dollars annually for the next decade. Panelists addressed challenges in power, cooling, data centers, and the potential for edge computing versus cloud-based solutions, with particular emphasis on India’s advantages in renewable energy costs.


Global South Leadership in AI Governance: The second panel focused heavily on how developing nations can be “co-authors” rather than “passive recipients” of AI governance norms. Countries like Maldives, Cambodia, Brazil, and Indonesia shared their national AI strategies, emphasizing the need for context-specific approaches that address local realities, resource constraints, and institutional strengths.


Trust and Safety as Prerequisites for Scale: A major theme was that trust is not an afterthought but a foundational requirement for AI adoption at scale. Panelists discussed the need for transparent, auditable systems with built-in grievance mechanisms, and how financial services have succeeded due to existing trust architectures that can be applied to AI deployment.


Practical Applications and Leapfrogging Opportunities: Discussion of specific use cases where AI can provide immediate value, particularly in healthcare (telemedicine), agriculture, climate adaptation, and urban planning. The concept of “leapfrogging” was emphasized, where developing countries can gain second-mover advantages by adopting cloud-based AI solutions without massive infrastructure investments.


Overall Tone

The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around infrastructure, energy, skills, and governance, speakers consistently emphasized opportunities and collaborative solutions. The tone was notably inclusive and solution-oriented, with government officials, private sector leaders, and international organizations working together to address practical implementation challenges rather than dwelling on theoretical concerns. There was a strong sense of urgency balanced with careful consideration of responsible deployment practices.


Speakers

Speakers from the provided list:


Tejpreet S Chopra – Founder and CEO of Industry.AI, moderator of the first panel on AI-driven strategies for workforce and economic growth


Satvinder Singh – Representative from ASEAN, discussing the Digital Economy Framework Agreement (DEFA)


Dr. Mahendra Karpan – Interventional cardiologist and presidential advisor to Guyana, expert in healthcare transformation and telemedicine


Nihar Shah – Director of global cooling program at Lawrence Berkeley National Lab, expert in energy and climate issues related to AI infrastructure


Vinod Jhawar – Representative from Nextra (subsidiary of Airtel), expert in data center infrastructure and AI vertical development


Narendra Singh – MD of RackBank and NeveCloud, expert in cloud computing and space-based data centers


Dipali Khanna – Senior VP and Head of Asia for the Rockefeller Foundation, expert in development and philanthropic approaches to AI


Mohamed Kinaanath – Minister of State for Homeland Security and Technology from the Maldives, government official leading digital transformation


Son Sokeng – Under Secretary of State from Cambodia, government official working on AI readiness and national strategy


Eugenio Vargas Garcia – Tech Ambassador for Brazil, diplomat specializing in technology and international cooperation


Aju Widya Sari – Director of AI and Emerging Technology Ecosystems, Ministry of Communications and Digital Affairs, Indonesia


Kip Wainscott – Executive Director of Global AI Policy from JPMorgan Chase, expert in financial services AI governance


Parag Khanna – Founder and CEO of AlphaGeo, expert in geospatial AI applications for climate and urbanization


Moderator – Host of the second panel “Trusted AI at Scale: A Global South Leadership Dialogue” (advisor to AI Safety Asia)


Audience – Various audience members asking questions, including Harsh Vartan (research fellow background) and the CTO of MindEquity.ai and founder of AI Society


H.E. Sokeng – Same as Son Sokeng, referred to with diplomatic title


Additional speakers:


None identified beyond the provided speaker list.


Full session report: comprehensive analysis and detailed insights

This comprehensive discussion at the India AI Impact Summit brought together government officials, technology leaders, healthcare professionals, and international development experts across two panel sessions to examine AI-driven strategies for workforce transformation and governance in emerging economies. The panels addressed fundamental questions about how artificial intelligence will reshape employment, infrastructure requirements, and governance frameworks, particularly in the Global South.


Workforce Transformation: Collaboration Over Displacement

The first panel revealed nuanced perspectives on AI’s employment impact that move beyond simple replacement narratives. Satvinder Singh from ASEAN emphasized that current AI deployment primarily affects knowledge workers through “collaborative augmentation rather than full automation.” This distinction proved central to reframing workforce discussions from fear-based concerns to strategic planning approaches.


Dr. Mahendra Karpan, speaking as both an interventional cardiologist and presidential advisor to Guyana, illustrated this principle through healthcare applications. He shared an anecdote about treating an 18-year-old snake bite victim from a remote village who had never seen car headlights, demonstrating how AI can bridge critical service gaps in countries facing severe specialist shortages while maintaining essential human elements for patient care and emotional support.


Tejpreet Chopra, founder and CEO of Industry.AI, stressed that rapid technological change demands lifelong adaptation, making traditional career models obsolete. The consensus emerged that continuous learning and upskilling will become essential across all sectors, requiring fundamental changes in educational approaches and the development of entrepreneurial cultures.


Infrastructure Challenges and Energy Requirements

Perhaps the most striking discussion concerned massive infrastructure requirements for AI deployment. Nihar Shah from Lawrence Berkeley National Lab presented data showing that data center growth has tripled over the past decade and is forecast to triple again by 2028. Chopra shared insights from a recent gathering of global CEOs in Abu Dhabi, where he learned that “the world will need four times more energy in the next 10-12 years” to support AI growth, requiring approximately four trillion dollars annually for the next decade.


Shah highlighted critical “blind spots” in AI infrastructure planning, particularly cooling systems and water consumption, which could prove as constraining as energy supply for developing nations. However, the panel also identified significant opportunities for India. Chopra shared his personal experience in renewable energy development: his first solar farm generated revenue at 18 rupees per kilowatt hour eight years ago, compared to 2.20 rupees today, with similar cost reductions in wind energy (from 8.50 rupees to 2 rupees per kilowatt hour).


Vinod Jhawar from Nextra detailed India’s infrastructure advantages, noting that data center construction costs 4-6 million dollars per megawatt in India versus 12 million dollars in markets like the US, Singapore, and Dubai. This cost advantage stems from India’s manufacturing capabilities, with 80-90% of required supply chain components produced domestically.


Digital Public Infrastructure and Regional Cooperation

Singh outlined ASEAN’s Digital Economy Framework Agreement (DEFA), which he described as “the largest regional digital agreement in the world,” connecting 700 million people across 11 countries. Significantly, he noted that data shows least developed countries within ASEAN will benefit most from this connectivity, challenging assumptions about digital transformation benefits.


This “leapfrogging” concept proved central to Global South AI strategy. Rather than replicating massive capital expenditure of developed nations, emerging economies can leverage cloud-based solutions and Digital Public Infrastructure models to achieve similar outcomes at lower costs. India’s approach, with over 50 countries building payment and identity systems on its DPI stack, provides a template for affordable, sovereign AI deployment.


The accessibility question extends to India’s 70 million Micro, Small & Medium Enterprises (MSMEs), which employ 230 million people. Chopra announced the development of what he called the world’s first AI supercomputer for manufacturing, priced at 6.5 lakh rupees, representing efforts to bring enterprise-grade AI capabilities to smaller companies.


Global South as AI Governance Co-Authors

The second panel session, moderated by an AI Safety Asia advisor, explicitly positioned Global South countries as “co-authors” rather than “passive recipients” of AI governance norms. This framing proved crucial in moving beyond imported templates to locally-relevant governance approaches.


Minister Mohamed Kinaanath from the Maldives provided a compelling case study of how small island developing states approach AI as a matter of “institutional resilience” and “survival.” The Maldives’ unique geography—1,200 islands spread across vast ocean distances—creates challenges that AI can uniquely address through their “Maldives 2.0” digital transformation agenda.


Cambodia’s Under Secretary Son Sokeng outlined their UNESCO-supported AI readiness assessment and national strategy development, emphasizing human-centric governance frameworks. Their ambitious goal includes developing 100,000 AI-ready talents over ten years alongside training 10,000 government officials in digital skills.


Brazil’s Tech Ambassador Eugenio Vargas Garcia emphasized the importance of “tech diplomacy” for Global South countries. He provided three practical recommendations: starting with small-scale projects, seeking international partnerships, and engaging actively in global discussions to ensure their voices are heard in international forums.


Trust as Foundational Infrastructure

A fundamental insight emerged that trust must be designed into AI systems from inception rather than retrofitted later. Dipali Khanna from the Rockefeller Foundation articulated this principle: “Trust must be designed from day one, not retrofitted after deployment. Transparency, auditability, grievance redress, open architecture are not compliance burdens. They’re adoption accelerators.”


Kip Wainscott from JPMorgan Chase provided the financial services perspective, noting that their industry’s early AI adoption success stems from existing trust architectures. As one of the world’s largest AI deployers with a $20 billion annual technology budget, JPMorgan Chase demonstrates how robust governance frameworks enable rather than constrain innovation.


Practical Applications and Development Impact

Dr. Parag Khanna from AlphaGeo introduced the concept of “second-mover advantage” for developing countries, arguing that later AI adoption can provide cost savings while delivering superior outcomes. His focus on geospatial AI applications for sustainable urbanization and climate adaptation highlighted practical use cases where AI addresses existential challenges facing the Global South.


Dr. Karpan’s experience in Guyana showcased telemedicine’s potential, with over 200 functional telemedicine sites equipped with Starlink connectivity demonstrating how AI-enabled healthcare can reach previously underserved communities while maintaining human oversight.


Economic Viability and Sustainability Challenges

The discussion revealed significant economic challenges in current AI deployment. Narendra Singh from NeveCloud highlighted a critical problem: “Today you spend $2 and you generate $1 because half of the 50% goes to the AI chip company.” This cost-revenue imbalance threatens AI viability, particularly for developing countries with limited resources.


Singh’s call for indigenous AI chip development addresses technological sovereignty concerns, while his mention of space-based data center initiatives represents innovative approaches to infrastructure challenges. The sustainability dimension emerged as crucial, with Jhawar committing to net-zero targets by 2030-2032 and 100% renewable energy sourcing for data centers.


Unresolved Challenges and Future Directions

Despite optimistic projections, several critical challenges remain. The talent shortage for AI infrastructure maintenance emerged as a significant bottleneck, with Chopra mentioning instances of substantial downtime due to lack of qualified personnel. The economic viability question persists, with current AI costs often exceeding returns, and dependence on expensive AI chips from limited suppliers creates vulnerability for developing countries seeking AI sovereignty.


The balance between rapid AI adoption and comprehensive governance frameworks remains contentious, reflecting broader questions about risk tolerance and institutional capacity in different contexts.


Strategic Implications and Conclusions

The discussions revealed that successful AI deployment in the Global South requires fundamentally different approaches from developed countries. Rather than competing on massive infrastructure investment, emerging economies can leverage second-mover advantages through cloud-based solutions, regional cooperation frameworks like DEFA, and DPI models.


The emphasis on collaboration over displacement provides a framework for managing workforce transitions while capturing AI benefits. The recognition that trust must be designed into systems from inception offers guidance for governance approaches that enable innovation rather than constrain it.


Most significantly, the positioning of Global South countries as co-authors of AI governance norms represents a shift from technology recipient to technology leader. The practical examples from India, the Maldives, Cambodia, Brazil, and other nations demonstrate that innovative approaches to AI deployment can emerge from resource constraints and unique local contexts, potentially offering more sustainable and equitable models than capital-intensive approaches of developed economies.


The path forward requires continued focus on infrastructure development, particularly renewable energy integration, talent development through continuous learning programs, and international cooperation through regional frameworks. Success in these efforts will determine whether AI becomes a tool for inclusive development or exacerbates existing inequalities within and between nations.


Session transcript: complete transcript of the session
Tejpreet S Chopra

Hi, good morning everybody. I’ve got an incredible panel here this morning. The topic that we have is, I think, the most important topic at the summit. I think everywhere I’ve spoken or everywhere I’ve been, it all revolves around this critical topic of AI-driven strategies for workforce and economic growth. And I think the reason this topic is super important is the fact that if you are a government official anywhere in the world, I think this is your biggest concern: how is AI going to impact society? How is AI going to impact the workforce? How is AI going to impact industries? So that’s going to be the most important topic.

So I appreciate everybody who’s out here. My name is Tejpreet Chopra, and I’m the founder and CEO of Industry.AI. We are an AI company that focuses on driving productivity. This is a passion for me, because I live and breathe this every day: we are trying to figure out how you create the digital workforce, or how you empower the workforce across industries. A quick introduction of my colleagues on the panel. We have Mr. Satvinder Singh from ASEAN; Mr. Narendra Singh from NeveCloud, who should be joining us any minute; Mr. Vinod Jhawar, who is really the key at Nextra, which is part of Airtel; Dr. Nihar Shah, who is one of the best in the healthcare space; and Dr. Mahendra Karpan, who is a presidential advisor to Guyana. Welcome, everybody.

Just one other key point, to put it in context. In India we have 70 million MSMEs. These MSMEs employ 230 million people. The MSME market in India produces 30% of India’s GDP and 50% of exports. The other big critical thing is how we bring AI for all, how we bring AI to all these companies that can’t afford what large companies normally do. And that’s the big challenge in front of us, and that’s what we’re going to talk about today. So in order to really kick this off, what I’d like to do is first talk about three critical elements in today’s discussion: how do we redesign our workforce strategies given this new technology that’s coming up?

How do we build the digital and compute infrastructure? And I’ll request Vinod to talk about that. And how do we really ensure that economic growth driven by AI remains inclusive, responsible, and sustainable? So with that, I’m going to request Satvinder, if you don’t mind kicking it off, and it would be good to understand from your perspective, how is the Digital Economy Framework Agreement going to help governments around the world navigate the opportunities that exist? Over to you. Thank you.

Satvinder Singh

Thank you, Mr. Chopra. Very good afternoon to all of you. Great to be here with all of you. I think all of us are enjoying this momentous impact event, and it’s a great place to be sharing ideas. I’m here specifically, Mr. Chopra, if you don’t mind, to give the perspective of ASEAN. Some of you may not know that ASEAN is next door to India. Today we are the 5th largest economic bloc, 700 million people, most of them middle and upper middle income economies who are part of ASEAN. And with India, of course, we are deeply connected: we have a free trade agreement, we have very strong trade and economic ties with India, and we have a lot of cooperation going on with India, including in the area of digital connectivity.

Mr. Chopra talked about the Digital Economy Framework Agreement; let me just update you on what that is. In short we call it DEFA. DEFA is a digital agreement we are now negotiating, in the midst of completing negotiations by March of this year; for the last two years we have been negotiating. It is the largest regional digital agreement in the world; the only difference is that it is also legally binding. So we are actually negotiating with the 11 countries of ASEAN to come on board, 700 million people to be digitally interconnected and interoperable, so that we can do business better and so that we can grow our economies better. Now the essence of DEFA came about post-COVID. I think COVID, like in India, in ASEAN too, really changed us. While COVID was not good for anyone, COVID also had a positive unintended consequence: we saw the greatest transformation taking place in the way we live, work and play. And that translated clearly into growth that took place post-COVID. I think the leaders in my region saw the prospects, and they also saw the numbers, where a huge chunk of economic growth is driven by digital. And therefore there is a no-regret move now to move the entire region into digital connectivity, and that’s where DEFA comes through. I think the interesting thing in ASEAN, I mean, well, India is one country; in ASEAN, like I said, there are 11 countries.

We have LDCs there, least developed countries like Laos, Cambodia, Myanmar, and now Timor-Leste just joined us. And then we have advanced economies like Singapore, Malaysia, Thailand, and Indonesia, which is a middle-income economy. So it’s a mixed bag of economies, but with the momentum of getting all of them together to do this, we were able to move the agenda because we were able to show very quickly through data that the biggest beneficiaries of DEFA are not even advanced economies like Singapore, because they are already there, digitally connected, but are actually the LDCs. We were able to show that the impact of DEFA will be greatest in terms of jobs, prospects, and economic growth, because they are really economies which are least developed, but they are going to be more developed.

We are moving into the latest of all kinds of connectivity at the lowest cost, and they will be the ones who will be able to benefit on a per capita basis in a maximum way. So that is how we were able to get 700 million people from 11 countries sitting on a common agenda of being integrated: because we were able to show them the money. If you don’t show them the money, nobody is going to jump in and do any such agreement. Money here means jobs, economic growth, deeper depth in terms of growth of the people and communities. ASEAN, for example, is already a very vibrant digital economy. Roughly it’s about 300 billion today, and we are going to be moving to a trillion dollars in size by 2030, in the next couple of years.

But with DEFA, the numbers are showing that the region is going to double the size of its digital economy. So I think this is where we are in terms of our ability to come together to be able to do business. And our idea is, of course, not to stop in ASEAN. The idea is that once DEFA is in place, we want to be connecting to India, economy to economy. I think this will really be fantastic. I think we can stop looking over the shoulder. I mean, basically, Global South, India, Southeast Asia, there are plenty of markets. Demographics are on our side. In fact, even the affinity of our people in wanting to embrace technology is on our side.

In fact, some of the studies are showing that it’s actually economies like Southeast Asia and ASEAN, as well as India, where people are seeing the translation of the use of AI in the most profitable way. The data is showing that it’s not in the West. It’s actually in our region where businesses are beginning to deploy small AI into their day-to-day business and making a big impact on productivity, on growth, and also on relevance. I’ll stop there, maybe.

Tejpreet S Chopra

Satvinder, you’re absolutely right, because I think in our part of the world, I tell everybody that India is trying to lead and be the bridge between the advanced economies and the emerging economies. But I think the dynamics of the technologies needed for our part of the world are very, very different from the West. So I think if we can get a good price point to provide these technologies, that will be great. Dr. Karpan, you’re an interventional cardiologist and you’re advising a lot of governments around the world. It would be great to get your perspective in terms of how you see AI and its impact in transforming public health care. Now, Dr. Karpan, I was at a discussion two days ago with Vinod Khosla, and he always says some really provocative things. One of the things he said was that a few years from now, with AI, we won’t need doctors in the world, and I know he said we won’t even need surgeons in the world. You’re the actual real person who does all this stuff, so it would be good to get your perspective.

Dr. Mahendra Karpan

Thank you very much for having me here, and I bring you greetings from our President, Dr. Mohamed Irfaan Ali, and Vice President, Dr. Bharrat Jagdeo. We are from a small country located in South America, and we are here to help you, with just a population of 850,000 people. I believe on the way here I might have encountered 850,000 people on the road. So you can imagine the scale that we are dealing with. What is transformative about Guyana at this time in our history is that sometime in 2015 we discovered oil offshore. And you know everything that comes with that transformative discovery. The oil and gas industry is now booming. And we are trying to learn all of the lessons from states that have walked this path before.

Those that have relied heavily only on oil and gas have encountered tremendous difficulties that we are hoping to avoid. So a couple of the things that we are using these resources to do is to help us with our health care, our agriculture, and our digital transformation in the public service. One of the other important things about Guyana is that we have the majority of our population living on the coastal area, and most of the rest of the country is forest. So we actually are pioneers in selling carbon credits to the world. The sad and vulnerable part of this, however, is that the coastal area is at sea level.

So using AI, predictive models, all of those things, is a survival tool for us, not just now, but for future generations. In recent times, we have been fortunate to have visionary leadership to take us in the direction we’d like to go. And we have in our country several remote villages of indigenous populations, and I’ll share a personal example from when I was more in hospital practice. There was an 18-year-old boy who had to be flown out from an interior village by military helicopter after a snake bite, and when he came to the city it was the first time that he saw headlights on a car. That’s where we still are in some places.

So we have been able in recent times to use our resources to establish telemedicine in particular areas. We now have over 200 functional sites that can actually serve these remote communities. They can do simple things like EKGs, x-rays, blood pressure, blood sugar, all of the common things, and respond to trauma, etc. So a healthcare worker, not necessarily a doctor, a community health worker, somebody indigenous to that area, can assess these patients. They go on video conferencing, and all 200 of these locations actually have Starlink, or we’re trying to implement that now, so that they have connectivity. So the specialists on the coast and in larger centres actually can give real-time diagnosis, real-time advice.

In the cardiac unit, my on-call team and I will always be able to review an EKG. Like in India, I suppose, our number one cause of mortality is still cardiovascular disease. Heart attack is a huge problem in our population. Historically, some of you may be familiar with the fact that most of our population at one time were indentured immigrants who left India, and the majority never returned. And they built homes and created generations of descendants in Guyana, Trinidad and Tobago, Suriname, in that entire region. So whatever is plaguing you in India from a healthcare perspective is the same thing that was transferred, because we maintain the same lifestyle; we have the same foods, the same likes, the same dislikes, the same genetic predispositions.

So in our context, what we have been doing is to start at the basic primary level, because that’s where we are. We’re not yet at the Singapore level and the others, but we’re hoping to get there in a very rapid, leapfrog type of strategy. So we’re using AI at this time to do primary healthcare, for inventory management, and for surveillance. And we’re moving into areas like agriculture for soil management, food production, et cetera, to help us in all aspects. So for those of you who are looking for opportunities: where there are challenges, there are always opportunities. So Guyana presents to you tremendous opportunity for investment, for development, and for long-term, multi-generational, sustainable involvement in our country.

And I am sure, and I bring you this message on behalf of the president, we welcome investors to Guyana.

Tejpreet S Chopra

Dr. Karpan, thanks so much for that. I completely agree, because I think the world is going to face exactly the same challenges, whether it’s in health care or in agriculture, and I think there’s a lot of cross-sharing that we can actually learn from. Nihar, I’m going to pull you in, right? So just for everybody’s benefit, Nihar is with Lawrence Berkeley National Lab, which is really one of the leading public research institutes in the world. But Nihar, I just want to share with you: about four or five weeks ago there was a majlis at ADNOC in Abu Dhabi, and they had a hundred CEOs in a room on a Sunday, and everybody in the world showed up. There were four groups of people: every major CEO of every major oil and gas company in the world, the CEOs of every major energy utility in the world, the CEOs of every AI company in the world, and the CEOs of every large capital provider in the world. I was trying to figure out myself what the connect was, and at the end of the day, what came out was that the world will need four times more energy in the next 10-12 years to support the growth of data centers and all the other things that are going to happen, and that’s going to require four trillion dollars every year for the next 10 years. So Nihar, with those numbers, I would love to get your perspective from a technology perspective: how should the world react to that kind of growth that’s needed? Thank you.

Nihar Shah

Yeah, so as mentioned, my name is Nihar Shah and I work at Lawrence Berkeley National Lab, one of the 17 Department of Energy national labs. If you've seen Oppenheimer, you might know where the national labs came from. And of course we have a very distinguished history with a lot of Nobel Prizes, so I won't bore you with all that. But first of all, I'm very grateful to CII for this opportunity to speak. And with respect to the question, energy is obviously one of the things I go to bed every night and wake up every morning thinking about, being at one of the energy labs of the United States.

Now, one of the other things that is probably not as well known: I direct the global cooling program at Berkeley Lab. And another blind spot, beyond the energy and the huge investment you mentioned, is cooling. That's a blind spot we don't really pay attention to. So at that gathering of CEOs, I hope there was also a gathering of HVAC and data center CEOs. In addition, one more thing we probably need to think through in countries like India is water consumption. So we need to think about this in a holistic sense. And in the bigger picture, with AI we are at the intersection of so many different things, but if somebody tells you they know exactly what's going to happen in three years or five years or seven years, they're selling you something, so you might want to take a second look at that.

I'll say a couple of other things related to what you mentioned about Vinod Khosla. Just a month ago I was in Silicon Valley (Berkeley Lab is based in the Bay Area), and Mr. Khosla was giving a keynote there. As usual, he said some very provocative things. One was that by 2030, everything that needs human expertise will be free or nearly free. The second, and this comes to our topic here, was that everything that needs labor is also going to be very nearly free. Where I disagree with Mr. Khosla is, again, the energy blind spot and the cooling blind spot that I mentioned.

So really, I think some of these things are going to be infrastructure bottlenecks, which some of our co-panelists will be able to address. And along with that, there's probably also going to be a talent bottleneck. When I say talent bottleneck, I don't mean talent across the board; it's going to be particular kinds of talent that we're going to need. Just today, you might have heard that India formally joined the PAC-Silica initiative by the United States. PAC-Silica is an initiative about the whole AI supply chain. So now you're talking about not just compute and not just the infrastructure, but the whole supply chain that will allow that to happen.

And that's an initiative the U.S. government has started. So there's a range of different things we could talk about. The workforce dimension, of course, is super important, and I can come back to any of those things. I'll mention one last thing: the Energy Act of 2020 requires Berkeley Lab to report to the U.S. Congress on data center growth, and they found that over the last decade, data center growth has tripled. So some of these numbers bear out even if you look at history. And the forecast is that by 2028 it will triple again. These things are, again, not very well known, but I do think these blind spots need to be addressed by all of us, and all of you.

And we're at a very interesting point with, I would say, this industrial revolution. So let's see what happens.

Tejpreet S Chopra

Thanks for that, Nihar. I think you're absolutely right: people are underestimating the challenges of developing all this infrastructure, whether in terms of cooling, power, or communications and fiber optics. So that's going to be a huge challenge. And with that, I'm going to turn it over to Vinod. Vinod is with Nextra; they're building some of the largest data center networks in India. But just before this panel, I was talking to Vinod, because I think there are going to be two parts of the world: there are going to be large cloud data centers, which Vinod is building, but I also think there's going to be another parallel world on the edge.

We at Industry.ai this week launched the world's first AI supercomputer for manufacturing, which can go on every factory floor in the world, especially at a price point for the 70 million MSMEs, to transform productivity. So two things, Vinod. One, it would be good to get your perspective on how the world is going to pan out between cloud and edge. And two, on all the challenges and bottlenecks Nihar was talking about, whether cooling, capital, technology, or skilled labor.

Vinod Jhawar

Sure. Thank you very much. I represent Nextra, a subsidiary of Airtel. We are in the business of building infrastructure for data centers; that's our bread and butter. We've been doing this for the last 20 years, and we've seen the evolution from normal server-room racks, to small enterprise customers, to now hyperscalers. And now we have the new elephant in the room: the AI requirement for data centers. So much demand is coming through to build large data center infrastructure that Nextra has decided to carve out a separate vertical, called AI VC, for it.

That's the vertical I represent. We are here to develop large-scale, gigawatt-class campuses to cater to the fast-growing requirements of some of our customers, primarily in the Indian subcontinent. And yes, it was rightly said that the challenges are there: power is a challenge, land is a challenge, and getting the right kind of skill set still remains a challenge. We come with 20 years of experience, so we have understood how to work on this. Being one of the pioneers of the home-grown data center industry here, a few of these challenges have pushed us to go beyond certain areas and look at new areas to build data centers.

Some of them are very close to the coast, so they can also accommodate cable landing stations, which takes care of a lot of data requirements. We are also putting sites close to the national grid. There were places where we used to source power at 33 kV; now we are looking at 700 kV. So the thinking has evolved, and it requires a separate thought process, and obviously a large amount of capital is required. So we are in this and we are well prepared. The demand is quite high, and Nextra's expansion is quite aggressive as well.

We have something going on in the south, which we think is the best way forward. On your second question, about power and the sustainability portion of it: most of the power we are going to source from renewable energy. That's a key strength of the Indian region. Luckily, thanks to some good policies put in place a few decades back by the government, we have plenty of renewable energy generators here, and the government is pushing to upgrade the infrastructure to evacuate this energy as well. So once we are at this high voltage, we are also connected to the central grid, which makes it very, very reliable for us.

We are aiming to be net zero by around 2030 to 2032. There is a big pool of renewable energy for us to tap into. At present, we have contracted close to 400-odd megawatts of renewable energy. So we are no longer looking at sourcing just 50% of our energy renewably; that percentage is now going to almost 100%. This is how the whole sustainability portion of the data center works, and India and Nextra are well positioned to tap into this. I think that's how the interest in doing a green data center evolves for us. The other challenge that was mentioned is the skill set. Yes, skill set is a challenge.

It probably requires a lot of debate. It's something that needs to be handled both at the fundamental level, at school and at university, and at the immediate level of training existing engineers to adapt, because by the time the next generation comes in, we would probably have missed the bus. So there are three or four approaches we are looking at to build an immediate skill upgrade, to make people capable of developing data centers. These are some of the things we are doing, and I think Nextra is very well positioned to meet whatever demands the customer is looking at.

Tejpreet S Chopra

Thanks for that, Vinod. One of the things that came out of that session in Abu Dhabi was that the country that's going to win the AI arms race is the one with the cheapest energy. And I really do believe that in India we have an incredible opportunity. I come from the renewables space. On my first solar farm, eight years ago, my revenue was 18 rupees a kilowatt-hour; today we get 2 rupees 20. My first wind farm was at 8 rupees 50; today we are at around 2 rupees again. So I think we have an incredible opportunity in India to really win this AI arms race, because the cost of producing energy is quite cheap. So, Narendra, first of all, welcome.

Narendra is the MD of RackBank and NeevCloud. Narendra, you've heard all the challenges on the cloud side. It would be good to get your perspective on, one, the cost of compute in India and how you really make it affordable for everybody; and two, how you ensure adoption across the country.

Narendra Singh

…kilometers away from Earth. We partner with AgniKul, which is a space tech company, and the space ecosystem in the country has evolved over the last seven years; the government has really opened up space for everyone, for private players. We are sending the first mission before the end of this year, and we believe this is for critical workloads: protecting the borders, unmanned vehicles, and all those things. So we have started exploring beyond Earth, and that's what is needed. And India can lead, because of the ecosystem: today, the cost of building a data center in India is 4, 5, 6 million dollars per megawatt, versus 12 million dollars in the US, in Singapore, and in Dubai.

Any market you go to, you get a cost of 12 million dollars. Why? Because 80 to 90 percent of the products required in the supply chain are manufactured in India, and we have to strengthen that. When the government announced the 200 billion, I believe it was only for data center infrastructure, not for chips. Chip cost comes on top of it, like 5x or 10x, depending on what chips you are using. So the opportunity is huge; it's a trillion-dollar opportunity for the country. Thank you.
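Putting the quoted per-megawatt figures side by side gives a sense of the gap being described (illustrative only: the $5M midpoint and the hypothetical 1 GW campus size are assumptions, and chip costs are excluded, as the speaker notes):

```python
# Illustrative comparison of the build-cost figures quoted above.
COST_PER_MW_INDIA = 5e6    # assumed midpoint of the quoted $4-6M/MW range
COST_PER_MW_ABROAD = 12e6  # quoted figure for the US, Singapore, Dubai

campus_mw = 1000           # a hypothetical "gigawatt-class" campus

infra_india = COST_PER_MW_INDIA * campus_mw
infra_abroad = COST_PER_MW_ABROAD * campus_mw

print(f"1 GW infra cost in India:  ${infra_india / 1e9:.0f}B")
print(f"1 GW infra cost abroad:    ${infra_abroad / 1e9:.0f}B")
print(f"relative savings:          {1 - infra_india / infra_abroad:.0%}")
```

At these quoted rates, the same campus would cost a bit under half as much to build in India, before any chip spend.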

Tejpreet S Chopra

I met Narendra around 2021 at the JW Marriott Hotel in Mumbai, when he was still putting together this whole strategy, and I was thinking to myself in 2021, what's going to happen with data centres? Now he's talking about data centres in space, so it's good to see the kind of progress that's happening. Before I go on, I have lots of questions to ask, but are there any questions from the audience? Go ahead.

Audience

Sir, I am Harsh Vartan, basically from the HDI industry, but before that I was working as a research fellow at CSIR. So my question to Mr. Shah is: we have seen hydrogen fuel cells being used at an experimental level in railways and buses, but they have not been implemented at large scale, neither in India nor abroad. So what are the… Thanks.

Nihar Shah

Yeah, I have many colleagues at Berkeley Lab who have actually been collaborating with India's National Hydrogen Mission. So thanks for the question. When it comes to fuel cells, I think there are a few bottlenecks. Some of them have to do with simply having the hydrogen infrastructure in the country. I'm not necessarily the right person to address all of the reasons why hydrogen fuel cells have not taken off, but I do think some of these things are still an R&D challenge, and many governments are looking at hydrogen to see whether you can eventually do the R&D to deploy it.

And there is collaboration going on, so stay tuned. India is obviously doing a lot on that, and other countries as well.

Narendra Singh

I can add to this. The bottleneck, as an operator, is that the cost should not be higher than what we are getting today from the grid. Innovation should lower the cost; then adoption will happen rapidly. That's why I believe adoption is not happening: people don't want to pay a premium for it. And I think India can take the lead in terms of cost adoption and price points. The supercomputer for manufacturing, we are seeing at 6.5 lakhs. So I think that's the kind of speed at which we are going to change the way things are going.

Tejpreet S Chopra

So, Satvinder, I want to pull you in on the ultimate question that everybody is asking: the impact on jobs. At ASEAN, how are you thinking about it? Because a huge concern for governments is whether AI is going to replace jobs or enhance jobs. It would be good to get your perspective, and what you say here is going to drive policy all over the world.

Satvinder Singh

So I am going to speak from the perspective of how data is already being collected on the impact on jobs, and I have taken this from studies that were actually done by Anthropic. It's a massive study on AI, jobs, and security, and one thing is clear: while there's quite a major hype around AI right now, when you actually study the impact it has globally, and even in Southeast Asia, in ASEAN, it's really impacting certain segments of the economy. I think the biggest impact is actually on white-collar jobs rather than blue-collar jobs. And even within the white-collar jobs, a lot of it has to do with collaborative augmentation rather than full automation and handing everything over to the AI.

So I think that's the stage we are at in terms of the AI technologies we have and how we're deploying them. Of course, when you watch Elon Musk and all these technologies and what's to come in two or three years' time, they are saying this is going to move from collaboration to totally replacing the human factor. In fact, that takeover part is what scares most societies. And I must tell you, this is now becoming front and centre of conversations among governments and policymakers. I'm actually quite certain that governments are not going to hand over to the machine the ability to replace all the important jobs at the high echelons of society.

That, I can assure you, is not going to happen. There will be a lot of effort and conversations going on, behind closed doors, where policy will have to come in to determine what can or cannot happen. Those barometers are going to be there, and I think that's where you see this impact. You saw the largest conglomerate of decision makers from the private sector sitting with governments in one location; you can see that the momentum is there. There will have to be an ability for us to differentiate and also to collaborate with the change that is going to come. Otherwise, I think you're going to see societies breaking up; the contract of governments with people is going to break; people won't have jobs. And if we are saying that the impact is not so much on the blue collar, then a lot of the farming community is probably sighing with relief. But we all know that in the cities, where there are millions of people, this is where it's going to be quite critical to get this contract properly sorted out. In the coming years you're going to see a lot of ethical rules and regulations set up to ensure that whatever change we embrace from the latest AI improves, not takes away, the quality of life. Collaboration is going to be the name of the game, not displacement of the human. It's not something that we in this room can decide, but clearly you can see the momentum is here for those kinds of difficult conversations to take place.

Tejpreet S Chopra

You're right, and I like what you just said. First of all, I'm hearing collaboration, not displacement; the word I've been using is enhancement, not displacement, but I think it's going to be all of the above. Dr. Karpan, it would be good to get your perspective, especially on healthcare. You talked about telemedicine; technically, I guess a doctor in the United States or India could be providing advice to somebody sitting in Guyana. So how are you seeing this whole world panning out in terms of the impact on healthcare jobs?

Dr. Mahendra Karpan

Thank you. I'm glad you mentioned the collaboration of these services. Yes, indeed, in the telemedicine space we have doctors from India and doctors from New York; the Apollo hospitals here and the Northwell group in New York collaborate, and they are able to help us with patients. In terms of displacement of human capital or human skill sets, though, for countries like ours, we're starting out at a severe deficit. There is not a surplus of radiologists. There is not a surplus of cancer diagnostic technicians. All of these skill sets are extremely limited. So AI actually comes in to help us with diagnosis: accuracy, speed of diagnosis, as well as the economic aspect of achieving all of those outcomes.

But I tell you one thing, as a physician: there are some things we are not too concerned about. In the emergency room, when there's a child who can't breathe from asthma and their scared parents, an AI can make an accurate diagnosis. It can tell you exactly what to give, what mixtures to nebulize. But to comfort and reassure those parents, that's a human function. At difficult stages of life, when you're facing terminal situations, end-stage cancer, you want somebody with warmth to hold your hand. That cannot and can never be replaced by AI. So in all of this we have to bear in mind the complementary aspect of this new era we are entering. We rely on the AI to give us the accuracy of the diagnosis. In fact, in Guyana we just purchased software to help us with CT scan interpretations, and the world is going towards more imaging and earlier diagnosis. That will be used effectively to reduce cost and to have better access to specialists. There was a time when we could not even contemplate getting the right treatment; we were just waiting for the top specialists from Apollo, or from Mount Sinai, to give us an opinion.

Now they're willing and able to, despite the time difference. Actually, it's about quarter to five in the morning my time, so if I'm a little sleepy, please forgive me. But this is how we're using it. That human touch, though, I don't believe will be replaced at all.

Tejpreet S Chopra

Glad to hear that. And also, I sometimes think that when you go and search something on the internet, you get hallucinations, you get false answers, so the last thing I want is a doctor to be searching and getting the wrong answer and suggesting the wrong medicine. Any other questions? Otherwise, go ahead.

Audience

Hello, I'm the CTO at MindEquity.ai, and I'm also the founder of AI Society. I have two questions, actually. My first question is: if I am starting an AI pilot company and want to have a full-scale impact, what are the biggest technical barriers?

Tejpreet S Chopra

Do you want to take that? Do you want to go ahead?

Narendra Singh

Scale, and AI economics. Today you spend $2 and you generate $1, because 50% of the cost goes to the AI chip company, and this problem can only be solved by enabling indigenous AI chips with better performance and lower cost; or else the big players enabling the entire ecosystem have to reduce their cost. That's the best way, because once you build the agents, we have a billion users; it's the largest market in the world. But when you scale, people will not pay for the value: $20 or $10 for a subscription is not enough when your cost is higher. So I think this will take some time; new chips are coming. And that leads to the job question, if I may answer that too.

On jobs: AI voice is seeing fair usage in the country, and even government is adopting AI; they are signing all these MoUs with foundation-model companies. What I believe is that they should not remove the call centers. They should come up with a policy, because AI is costing 7 rupees per call versus only 1 rupee per call for a call center. So by adopting AI here you are firing millions of jobs, and what happens after that? I think in some areas the government has to restrict AI. Those are the challenges, because we have to wait some time to figure out what these people are going to do next, and upskill them.
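The speaker's figures can be laid out as a small unit-economics sketch (illustrative only, using the numbers quoted in this turn; the breakeven calculation is an added, hypothetical illustration):

```python
# Illustrative sketch of the quoted figures: $1 of revenue against $2 of
# cost, with half the cost going to the chip vendor; plus the quoted
# per-call costs of 7 rupees (AI voice) vs 1 rupee (human call center).

revenue, cost = 1.0, 2.0
chip_share = 0.5                           # fraction of cost paid to chip company

chip_cost = cost * chip_share              # $1.00 to the chip vendor
other_cost = cost - chip_cost              # $1.00 everything else
margin = revenue - cost                    # -$1.00 per $1 of revenue today

# Hypothetical: chip cost needed for breakeven, holding other costs fixed.
breakeven_chip_cost = revenue - other_cost

# Per-call comparison quoted for voice workloads (rupees per call).
ai_cost_per_call, human_cost_per_call = 7.0, 1.0
premium = ai_cost_per_call / human_cost_per_call

print(f"margin per $1 of revenue: ${margin:+.2f}")
print(f"breakeven chip cost:      ${breakeven_chip_cost:.2f}")
print(f"AI voice per-call premium: {premium:.0f}x")
```

On these numbers, chips would have to become essentially free for breakeven at current prices, which is the speaker's case for cheaper indigenous chips.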

Tejpreet S Chopra

Thank you. Can I give somebody else a chance? Thanks so much. Somebody at the back. Sorry. Just give me one minute. Go ahead.

Audience

A question for Mr. Satvinder. On the topic of job security: how can strategies like upskilling or reskilling help preserve jobs, keep the human in the loop, and, in the end, make the relationship between the two better and more human-friendly?

Satvinder Singh

So clearly, the effort in most countries in the world is to really start upskilling their populations. It's really beginning; it's starting in schools, but it's going out to the workforce, because the workforce today is actively under siege from all this AI implementation. Obviously, there are countries who can afford the upskilling; the more developed countries are quite generous in terms of capacity building, coming out with programs and even empowering employers and workers to help themselves. But I'm also sometimes worried: what are they upskilling with? What they upskill with now may not be enough in two years' time. So this upskilling is going to be a real uphill task for all of us. Ultimately, you have to continue learning; upskilling is the word, but continuous learning to adapt is going to be the name of the game. And it's going to be harder for my generation, and some of us on the panel and in the audience, but not for some of the younger ones who are just starting work right now. When I talk to some of the younger people, they are less worried about this, because they have already grown up in a universe where things move at that speed. They are not talking about lifelong careers; they are talking about lifelong skills that they will keep adapting to new change. I think that is the name of the game to survive.

Tejpreet S Chopra

go ahead

Audience

Basically, you said the solar revolution came to India when the tariff was around 18 rupees per kilowatt-hour, and now it's around 2. There were little catalysts involved to boost the solar revolution in India; one of them was subsidy, along with information to people. Basically, I am in favor of AI being boosted, because it won't hurt jobs; it will fit people at the level where they should be, and it will increase literacy. So are we also planning to give subsidies to the various AI projects that are in development, because that would act as a catalyst to boost AI the way the solar revolution came to India?

Tejpreet S Chopra

I think the government is doing a lot already. The government has already given 10,300 crores for the India AI Mission, for sovereign AI. They are making GPUs available at 65 rupees per hour; yes, per hour, not per month or per year. So we are already the cheapest in the world. The government has a whole slew of incentives and subsidies that they have announced, and they keep adding more. I don't know if anybody else wants to add.

Narendra Singh

There are quite a few, and it's all public. The event is all about India; the India AI Mission brings us together to build this entire ecosystem. Two years back, Sam Altman was in India saying you can't do this; now look, the country has launched 12 foundation models. This is only possible when you democratize AI access, or GPU access, for innovators like you. That's what the government has already done: you just make a request, you get an allocation of GPUs with one of the providers like us, and half the price is paid by the government. By the way, it's not a subsidy to us; they pay the full price and subsidize the end user, the innovators like you. Thanks.

Satvinder Singh

So when it comes to higher education, the challenge is worldwide. I can tell you, in dinner conversations with some of the most established people I've sat with, whose children are all in higher education, you can see that the value system is changing. The focus today of some of the most influential people, who can make a difference, is to actually encourage their kids to become more enterprising. So I think the culture of being enterprising has to be prioritized for us to be able to adapt to what's coming. If we create and inculcate that in the universities even more, and bring it up front, that will be how we can overcome some of the challenges you face.

At least, I’m trying to address this first part.

Narendra Singh

Yeah, I can add on top of that. You can get the GPUs from there, but now you don't need to learn to code; you can code through AI. So you can build with this. We have a billion users and a billion problems, so you can solve those problems. You can make students entrepreneurial: solve one problem at home, or something college- or school-related. That's what we encourage; students should go on industry visits, because if they see it, they will do it. And the physical world has a lot more opportunity than the digital world, because the digital world is now concentrated. Imagine how many apps you are using on your phone.

Now this will all go to one app, which is OpenAI or Claude. So the money is going to one company, and that's more dangerous than anything else.

Nihar Shah

I'll just add on the energy part. You heard Narendra talk about putting data centers in space: free cooling, free energy. So that's one part of it. But the other interesting part of this whole energy question is that with AI, you are really able to imagine the potential. We don't know what we don't know, and we don't even know what we know. One example: with respect to designing better chips, when Google DeepMind gave the chip-design problem to AI, they found a 30% improvement in chip performance, because AI was able to design better chips.

So you can think about AI designing better data centers, AI designing many different parts of the whole chain. We don't know all the different things AI can improve: even mathematical and computational efficiency, these kinds of things. There are many different domains we haven't even touched that could also have a transformative impact. So this is, like I said, a very exciting time in our lives, where we get to really see what the impact is going to be. Thanks.

Tejpreet S Chopra

Vinod, do you want to add? Do you want to add something? Okay.

Vinod Jhawar

Just to add to that: what we expect AI tools to do is give an opportunity to the grassroots. When these tools are employed, people learn through them, and a lot of language barriers are also being broken; English is no longer a barrier with all the AI tools. So you will have a set of people using them for whom formal qualification is not a driver. This is what AI will do, and we will see a trend of a lot of blue-collar workers upskilling by themselves, with no need to link it to degrees. It is self-learning modules, assisted by AI tools, that will make them competent for the market. They could be specialists advising, or entrepreneurs, or coaches; they need not be sitting at a desk doing something that's written there. This is how we feel the education system will also change with these tools available.

Dr. Mahendra Karpan

Thank you. So obviously we are the newest, particularly in this room; we're new to the AI game. But one of the things we've been able to do in Guyana is to create a digital school at the primary level, and it has started working. In fact, it's now being requested by other countries in the region, the Caribbean region. Part of our objective is to use this to get kids hooked on technology and AI-type education. Hopefully it can be tailored to each individual child, to identify strengths and weaknesses and to strengthen the areas that are weak, whether it's literacy, numeracy, anything, tailored to that particular child, so that their interests can be piqued, exploited, and expanded.

Ultimately, they may be able to condense eight hours of school time into maybe three hours, and then they can go outside and play like normal kids, the way we used to play as kids. So digital schooling, the digital era, is not necessarily about taking all their time behind a computer and a desk, but about giving them more time and more freedom, and creating a habit that can follow them, not just at the primary level, but when they get to university and all the way up to adulthood.

Tejpreet S Chopra

Thanks very much. I know we're out of time, and the clock has gone off twice now, so I'll quickly wrap this up in about 45 seconds. I think it's been an incredible discussion, and I just want to quickly summarize six or seven key takeaways. The first is that jobs are going to be key: it's going to be collaboration, not replacement, in the way we do our jobs. Dr. Karpan talked about agriculture, health, and medicine; I think there's going to be a huge transformation.

But the good thing is humans want touch, so that's good. But there will be a lot of revolution in terms of telemedicine, et cetera. Nihar, you talked about cooling and bottlenecks. I think those are things that we all have to think about in our countries: how do we provide the infrastructure for cooling, et cetera, and that's something you can work on and let us know how to make it more efficient. I think that's great. You talked about NEXTRA and the talent challenges we're going to have, and how we're going to have to manage all these data centers. Somebody mentioned recently that some of the big data centers in India are down 30 percent of the time because of the lack of talent to maintain them.

You talked about supply chain. It’s fantastic that India is also thinking about putting data centers in space, which is fascinating. There’s going to be a big debate about, you know, more data centers versus the edge. Like I mentioned to you, you know, there’s one school of thought that we can actually bring the big AI to every factory in India by bringing it on the edge. And the computing power that’s developing is going to make that happen. And the last point I want to say is the point that you mentioned. And I think this is going to be the key takeaway for all of us, is that the speed at which technology is changing is so rapid that we’re all going to require continuous learning going ahead.

And it doesn’t matter how old you are, but that’s going to be the biggest takeaway for me, that that continuous learning and upskilling is going to be the key for all of us. So with that, really, thank you very much to all my panelists. And to everybody. And hopefully we can all make an impact around the world. Thank you very much.

Moderator

Thank you. Dipali Khanna, please take a seat at the front. Ambassador Garcia, and, if anybody wants to, Ibu Ayu. Okay, thank you very much, everyone. I'm getting my cues from the photographer; it's not my show yet until I start. And you'd like us to stand up for a group photo? Okay. Thank you very much. It's very Asia: it's not an event unless there's a photo, so thank you very much. All right. Thank you very much. Good afternoon, everyone, and welcome. It's a real privilege for me to host and moderate this session, Trusted AI at Scale: a Global South Leadership Dialogue, here at the India AI Impact Summit.

Now, this session hits squarely within the summit’s trusted AI pillar, and deliberately so. Because trust is no longer a downstream concern, it is now the condition for scale. Across governments, enterprises, and societies, we are moving past the question of whether AI will be adopted. The real question is whether it will be trusted by citizens, by institutions, and across borders. So why this session, and why did ISA host this session? The framing for today’s conversation is very intentional. Much of the global AI governance debate is still shaped by frameworks emerging from the Global North, the US, Europe, and China. Those frameworks are important, but they are not sufficient for the lived realities of the Global South, where AI is often deployed at population scale, under real resource constraints.

and in contexts where the cost of failure is not abstract; it is social, economic, and political. This is precisely the gap that AI Safety Asia was created to address. I am one of the advisors of ISA, and our mandate is straightforward but ambitious: to bridge the global north and the global south on AI governance, not by importing templates wholesale, but by co-designing governance approaches that are interoperable, pragmatic, and grounded in local institutional strengths. And we do this through three pillars: collaboration, capacity building, and policy-relevant research. And what makes this session different? That brings me to expectations. This session is not about abstract principles or ideal end states. We are here to surface operational blueprints, how trust is built in practice, and we have an amazing panel that will hopefully be able to really bring that to the table.

and how safety is governed under real constraints, how AI systems actually reach the people that states often struggle to serve. The speakers you will hear from today are not theorizing from a distance. They are governing, financing, regulating, and deploying AI in the real world, from small island states to large democracies, from welfare delivery to financial systems, from regional cooperation to enterprise risk management. So one final framing point before we begin. The goal of today's dialogue is not to position the Global South as a passive recipient of AI governance norms, and we'll definitely hear from Cambodia, from the Maldives, and Indonesia, and Brazil. It is to position the Global South as a co-author of those norms, contributing models of governance that are population-scale, institution-aware, and grounded in lived social reality.

That is the through line of this session, from why trusted AI matters to who it must reach to how it is enabled, governed, and ultimately operationalized. With that, I’m delighted to open this dialogue, and we’ll begin with opening remarks that set the stakes why trusted AI is existential and not abstract. And then we’ll move through discussion. I realize that time is very short, so I think one of the reasons Ed put me here is because I’m known to crack a whip a bit. So with all due respect, I know you’re all very important people, but I will let you know when the time is up. So with that, I would like to invite His Excellency Professor Mohamed Kinanath, Minister of State for Homeland Security and Technology from the Maldives.

Your Excellency. Your Excellencies, distinguished heads of delegations, honorable ministers, esteemed leaders. It is both a privilege and a profound responsibility to stand here, not merely as a representative of the Republic of Maldives, but as a voice for many SIDS, small island developing states. I extend my warmest gratitude to the organizers of this forum for creating a platform where the aspirations of nations, regardless of their geographical size, can be heard alongside the strategies of those leading the frontier of innovation. Ladies and gentlemen, when the global discourse turns to AI, it is often centered on the ambitions of large economies, on computing scale, or on a trillion-dollar geopolitical competition. And while these dimensions are significant, this represents only part of the narrative for SIDS like the Maldives.

Nations defined by geographical dispersal of small islands, 1,200 islands, a narrow economic base, and acute exposure to climate change. AI is not a matter of competitive advantage alone. It is a matter of institutional resilience. It is a matter of sovereign capacity, and increasingly, it's a matter of survival. The Maldives comprises nearly 1,200 remote islands, spread across 850 square kilometers. Our economy has been mainly based on tourism. Our exposure to sea-level rise remains among the highest of any nation on Earth. These realities do not diminish our ambitions, but they demand that we adopt technologies that can deliver public services efficiently across vast distances, strengthen governance, and diversify the economic foundation. The government of the Maldives, under the leadership of our current president, launched a Digital Transformation Agenda, a comprehensive national vision to transform the Maldives into a digital-first nation within the coming three years.

The technology vision is called Maldives 2.0. It is not a technology initiative in isolation. It is a fundamental reimagination of how the state serves its people, how the economy grows, and how opportunity reaches every citizen of the Maldives. We have already begun the implementation. The Maldives has good technology infrastructure if you look at the region. We have one of the highest internet penetration rates in the region, and the highest number of mobile subscribers in the region. Our population is half a million; we have 1 million mobile subscribers. 4G coverage is 100%. 5G is 80%, one of the highest in the region. Six subsea cables. And fiber to each household is 100%. So maybe some of the European countries have not even achieved these statistics.

So considering the delivery of AI, and considering the geography of the Maldives, AI is very important for us when it comes to the health sector and the education sector, since our islands are very remote. AI also offers the Maldives a pathway to economic diversification, enabling us to develop a knowledge economy, to cultivate local technology enterprises, and to position our youth digitally. The Maldives is not approaching AI without preparation. We are building governance structures to ensure that this technology serves our people ethically. In July 2025, the Maldives launched the AI Readiness Assessment Methodology Report, which was developed with assistance from UNESCO. This landmark assessment is the first of its kind in South Asia. Building on this assessment, the government is now advancing to develop a national AI master plan, and an AI Act is also underway.

The UNESCO Readiness Assessment has further recommended the establishment of an independent AI governance body and a multi-stakeholder advisory council. These are some of the recommendations of the report. So as I told you, the Maldives is getting ready for AI, and since we have this Maldives 2.0 transformation mission, we are working very hard over the next three years to complete digitalization in the Maldives. So, excellencies, the Maldives may be small in land, but we are vast in determination. We are a nation that has built its identity upon resilience, resilience against the tides that shape our shores.

So Maldives 2.0, as I have said, is our vision and our commitment to the future. AI deployed responsibly and governed ethically is central to this vision. We do not seek to replicate the digital trajectories of large nations. We seek to chart a course that is authentically ours, one that reflects our values and addresses our vulnerabilities. As the world convenes to deliberate on the governance of AI, let us build an AI future that is inclusive, intelligent, equitable, and as human as the technology is powerful. So thank you so much. Thank you

Moderator

so much, Your Excellency, Minister Kinanath. I'd now like to invite Dipali Khanna, Senior VP and Head of Asia for the Rockefeller Foundation, for her remarks. Just before

Dipali Khanna

I start, we were talking about global north, global south. What struck me in this panel is that the women are at the periphery, right? So in anything and everything that we're going to do in this space, we'll have to get women back in the center. But I was also excited that we have strong women who can manage these men. So anyway. Good afternoon, Ministers, Excellencies, colleagues, and partners. Let me begin by thanking AI Safety Asia for convening this dialogue and JPMorgan Chase for co-hosting. The fact that this conversation is happening here, in this region, with this leadership, really matters. PM Modi in his keynote yesterday laid out the vision for Manav: building AI that is safe, ethical, and centered on people, ensuring technology serves humanity responsibly and benefits everyone, including women, right?

We've just heard powerful perspectives that bring the point to life. From the Maldives, that AI is not abstract policy; it is a survival tool. I know a colleague from Togo couldn't join, but I'm sure she would have mentioned that trusted AI can make the invisible visible. So the question before us is not why AI matters or who should benefit. It is how we build it responsibly, at scale, and with legitimacy. For over 100 years, the Rockefeller Foundation has leveraged advanced technologies for the betterment of society, and we believe there are learnings from that work that also apply to trusted AI: partnership, patient capital, and institutional strength. What distinguishes success stories like Togo's Novissi and India's CoWIN is not just technological sophistication; it is alignment.

Governments willing to move decisively, private sector actors willing to collaborate, technologists willing to design for public systems, and catalytic capital willing to absorb early risk. Novissi reached nearly a million informal workers, not in months, but in days. CoWIN delivered at population scale with transparency and interoperability built in. This was not mere luck. These were examples of ecosystems working together. That's partnership. For adoption, users must trust both that AI will deliver benefits and that it will do so without harm. Much like early vaccine development, we need to invest both in supporting users to adopt the technology and in building robust evidence and systems that ensure safety. And scaling this trusted AI in the global south requires more than venture timelines.

It requires risk tolerance. It requires capital that understands that building sovereign AI capacity involves experimentation, regulatory iteration, and institutional learning. Philanthropy can truly play a catalytic role here, not by replacing markets, not by dictating governance, but by de-risking what some leaders have described as the smart adopter model. The smart adopter does not wait for perfect consensus. It adapts responsibly. It pilots with guardrails. It builds local institutional muscle alongside technical capability. Catalytic capital can support regulatory sandboxes, independent safety assessments, talent pipelines, and interoperable standards, so that adoption is both fast, nimble, and safe. That's the power of patient capital. And finally, institutional strength. Digital public infrastructure has shown us something profound. Trust must be designed from day one, not retrofitted after deployment.

Transparency, auditability, grievance redress, open architecture are not compliance burdens. They’re adoption accelerators. If our AI systems are to scale in health, climate resilience, food systems and financial inclusion, they must be built on institutional foundations that citizens recognize and most importantly trust. Businesses have a critical role here. Responsible innovation is not simply about internal governance frameworks. It is about long -term partnership with governments and societies. It is about seeing trust as a strategic infrastructure, not friction, because trusted systems scale, untrusted systems stall. The Global South is demonstrating that it does not need to choose between speed and safety. It can design both. The opportunity now is to align partnerships and patient capital behind that leadership. So that trusted AI at scale is not a slogan.

It is operational. The Rockefeller Foundation stands ready to continue playing a catalytic role in that journey because trusted AI is not simply a governance aspiration. It is a development imperative. Thank you

Moderator

Thank you so much to both Your Excellency Minister Kinanath and Dipali for the remarks. I thought, again, it's a great way to start us off for the discussion today. You're welcome to stay sitting in front, but we'll start the discussion. Actually, I think it's set the tone for what we really wanted.

So everyone that you see in front brings a very specific experience and skills, coming either from the private sector or their government. And, as Dipali mentioned, building trust starts from the beginning; it's not an afterthought. So I'd like to start with Under Secretary of State, Your Excellency Sokeng. I guess the question I'm going to have for everyone is: what is the single biggest obstacle to operationalizing trust in your context, based on your experience? And what can this room, filled with quite a lot of people from different sectors, do about it? And what have you heard these past couple of days as you've been here at the summit?

Son Sokeng

First, thank you, Ima, for the very good set of tones for these discussions, and I'd like to thank AI Safety Asia for having me on this panel. From the Cambodia perspective, I would say a short answer to that is how to get people familiar with AI, and that starts with the people: the user, the leader, and the regulator. Aside from that, I can talk a little bit about the Cambodian experience. Similar to what the Excellency Minister from the Maldives mentioned, Cambodia began the journey by conducting the AI readiness assessment supported by UNESCO, which we completed last year, in July 2025. From that, we can rely on the recommendations and start to think about the strategy for Cambodia to move forward in terms of AI adoption in Cambodia.

So based on the recommendations, the national AI strategy has been drafted, and currently we are in the process of finalizing it. At the same time, we are also drafting the national AI governance framework, keeping the national strategy in mind. And one of the key strategic priorities that we have in the national strategy is people, which is the first priority, ranging from the user, as I mentioned earlier, to the leader, the regulator, and also government officials. The second priority is infrastructure and data. The third and fourth are AI adoption in government and in the private sector. The fifth strategic priority is governance, and the last one is cooperation and research.

So based on these priorities, we can see that people are still the first and key priority for Cambodia. Building on that, our draft AI governance framework is also very much human-centric, and we believe that governance should be aligned with the risk of AI. So the content of our governance framework will be based on risk assessment, and to understand the risk, people have to know the impact of AI. The government has a very clear intention: we need to educate people, and let people understand what AI tools are and their implications. Since 2024, the government has introduced the Cambodia Digital Skills Roadmap, which outlines the plan for the next 10 years for Cambodia in terms of human development.

And our goal is that in the next 10 years, we will have 100,000 talents who are AI-ready. In addition to that, we also introduced various programs to educate government officials. As of now, we have trained more than 10,000 government officials in basic digital skills, part of which is AI skills as well. So based on what we have right now, if there is one thing you ask me that we can do in this room, I would say it is to increase the capacity of humans to understand the risks of AI.

Moderator

Thank you so much. I have so many questions, but I'm going to hold them for now. I'm going to move to Ambassador Garcia, the tech ambassador for Brazil, a G20 country leading BRICS, along with Indonesia in the G20. As I mentioned, the global south must be architects and not observers, and I believe Brazil is at the forefront of this. Can you say a little bit more about that, and what obstacles do you find?

Eugenio Vargas Garcia

capabilities to harness the power of the technology. So somehow we need to enhance our own national capabilities, but in cooperation with other partners overseas. And finally, we just had COP30, as you remember, last November, and we included digital technologies and climate change as a sustainability problem, because now we have been discussing data centers and energy efficiency. So sustainability is key, in a way that we are always trying to send this message in terms of an AI development-oriented strategy. And I think for the global south it's important that we engage in tech diplomacy, because otherwise we will not get heard and we will not be able to do what we are doing. We need to speak up and have our voice heard where it matters. Thank you so much.

Moderator

So, moving from one great nation in the south to another one: Ibu Ayu. Ibu Ayu is the Director of AI and Emerging Technology Ecosystems at the Ministry of Communications in Indonesia. Ibu Ayu, can you tell us a little bit more about Indonesia's national strategies, and where you find the obstacles? And, as Ambassador Garcia mentioned, in the ecosystem of BRICS, where does Indonesia, or where does ASEAN, sit on this as well?

Aju Widya Sari

Thank you, Ima. It's an honor for me to sit here, from the Ministry of Communications and Digital Affairs. I cannot say it is an obstacle; I would say it's challenging. Mentioning the challenges, Indonesia has many things to be resolved. One is infrastructure, because the penetration of broadband, especially mobile broadband, even though it is above 95%, is still based on 4G coverage, and for AI we need more 5G coverage. And then the penetration of fixed broadband and backbone is quite low, because Indonesia has hundreds of districts and around 10,000 sub-districts. Today, the penetration is still 70% by sub-district. That's why we need to push the penetration of the backbone.

Regarding data centers, we have many data centers today, but the GPU base is still limited, so I think we need to invest more in processing capacity for AI. That is related to infrastructure. Relating to the framework of regulation, right now in Indonesia we have set up the national AI roadmap, and we are also preparing the ethical AI guideline. Talking about the national AI roadmap, we are sure that we need strategies that are real, not just theoretical, because when we execute our vision in the national roadmap, we have four strategic directions. One is collaborative governance, and the second is encouraging the innovation ecosystem.

And the third is to strengthen capabilities and capacities, including infrastructure. And the last is mitigating risk. This national roadmap is important for us because we need clarity for the five years ahead regarding the issues that come from AI. Regarding the ethical AI guideline, we set up the rules and clarity of responsibility for AI actors, and we are also preparing the instruments for monitoring and evaluation. Because ethics are not just ethics; we have to monitor them. And the last is that we have to put safeguards in place for the people, in how they use and develop AI. That's the main thing that we are preparing.

Moderator

Thank you, Ibu Ayu. I think it's been mentioned already, and I'm glad Ambassador Garcia mentioned it, and of course, in his address, Minister Kinanath mentioned it. I'm going to turn to Dr. Parag Khanna. You did mention something in, sorry, I'm going to bring up the discussion that we had in the green room; I'm calling it the green room, it's not actually green. About how you, as a private company, utilize AI and how that could be beneficial for climate resilience and development, especially in vulnerable countries. So can you tell us a little bit about that? Dr. Parag Khanna, of course, is the founder and CEO of AlphaGeo.

Parag Khanna

Thank you. Thank you so much, Ima. Thank you all for being here. Well, AI as a concept evokes this notion of leapfrogging. Do you remember when we used to use that term all the time? And, of course, it applied very appropriately to mobile telephony, to fintech, to renewable energy, solar panels. So, you know, faster, better, cheaper. And inherent in that concept, which is very important now when we talk about AI, is the notion of second mover advantage. That's what leapfrogging was fundamentally about: having second mover advantage. Now, why that's relevant right now is because, of course, developed countries, particularly the United States and others, have invested enormous amounts of capital in the capex requirements of AI.

You know, some significant percentage of U.S. GDP growth, for example, is attributable right now to that AI capex-related infrastructure investment. But that is not something, of course, that nations of the global south, so to speak, developing countries, can afford. And so the question becomes: is there an advantage to late development when it comes to AI that can save developing countries of the south a lot of money while still enjoying the fruits of that innovation? Especially in a context where you're making trade-offs between electricity, food, water, the basics for your population, while there's now this almost emotional, hype-driven pressure to invest, to clear land, to build data centers, to divert energy, we have to ask ourselves: is that the right way to allocate capital?

Or should one be taking advantage of cloud computing, edge computing, sovereign cloud solutions that can generate the same or better output, more bang for your buck, with less capex expenditure? That's the moment we're at. And it's important to remember that we're having this conversation in India. One of the virtues of India hosting this event is precisely that India's rise as an AI superpower breaks the narrative out of this conventional wisdom that it's a two-horse race between the U.S. and China, and that you're doomed in some way to choose between your data being hoovered up by one or the other player. What India is offering, at least more than in theory, is rapid diffusion of the latest technologies, cloud-based models and solutions, through the tools of digital public infrastructure, DPI, which has been one of the benefits of this event now over the course of the years.

People have learned this all-important acronym, DPI, which I've adhered to, believed in, and been a supporter of for quite some time, because it does hold the promise of neutrality, of a menu of options, of being delivered in a way that protects sovereignty of data, but in a very affordable way. And India has been a pioneer of it, obviously domestically, with Aadhaar and so forth. And, if it hasn't been disclosed enough this week, more than 50 countries are building payment systems and identity systems on that stack. So that's a great example of DPI. So think of AI in that mold of second-mover advantage, leapfrogging, following the mold of cloud-based solutions that can be low-cost.

Now let me just quickly talk about two areas where there are huge gaps in public sector access to data, ownership, or simply knowledge of solutions, areas where the way we apply AI and data science to geospatial data matters. And these are also two areas that are of critical, fundamental, if not existential importance to developing countries. One is sustainable urbanization, and the second is climate adaptation. Anywhere you go in the world, actually rich or poor countries, if you survey the average person on the street and ask them what is the biggest problem plaguing your society, nine out of ten people that I speak to, in dozens of countries around the world, say affordable housing.

And I think that’s a really important part of the problem. just sustained urbanization that is so organic, so rapid, so unplanned accelerating around the world but now finally governments have the tools again, geospatial tools, mapping tools understanding which districts, which settlements are expanding and why where are people coming from what kinds of housing need to be built where governments have always been fighting backwards or if not given up, quite frankly, on grappling with these issues but now we have foresight AI -powered geospatial tools that can look decades ahead and say this has been your time series urban expansion this is how you map it out this is where you should be building what and when and so bringing together demographics bringing together infrastructure bringing together migration, fiscal spending and directed targeting in a way that is a great use case for AI that almost the entire development developing world could use and has barely begun to use and is very cost -effective, right?

So that's number one. The second, equally if not more important, is climate adaptation. Climate risk is both acute and chronic. We're talking about monsoons, floods, fires that are becoming more frequent and devastating. And, of course, complex climate modeling is not the kind of thing that any individual country can or should finance. We have global climate models that are AI-powered, that are developed with the world's best institutions, that are publicly financed, that are now available to be downscaled for your country. And this is something that, again, especially developing countries, especially countries of the south, that are most affected by climate dislocation, by climate risk, can and should take advantage of. But, again, to be clear, they have barely begun to do so yet.

So targeting infrastructure investments, targeting your infrastructure to adapt to climate risk: where do you need to build seawalls, flood barriers, flood control measures, irrigation systems for drought? On all of that we are, again, just like urbanization, years and years behind, and there's almost no country on this panel, almost no country in the world, even wealthy countries, that is ahead of the curve on this. The entire planet is behind, as we know from COP summits, but it is countries of the south that are going to be worst affected, on the fastest timeline. If you're not using the tools that are available right now, AI-powered climate modeling, scaling, adaptation scoring, to plan your national infrastructure, and then putting together the public-private partnerships to get it done, you're behind.

So this is about global public goods, right? Affordable housing, manageable urbanization at a global scale, climate adaptation for people everywhere, but it has to be delivered locally. And that's why it's incumbent on each nation represented here, each nation of the global south, to really harness and take advantage of these tools.

Moderator

Thank you so much, Dr. Parag. Again, lots of questions in my head. Kip Wainscott, Executive Director of Global AI Policy at JPMorgan Chase, one of our biggest supporters as well. Thank you so much for supporting us for this event. You are not here just because of that, I can assure you. But I'd love to hear what you have to say, particularly on the question I brought forward earlier about the obstacles, in particular where you see them, but again in financial services, model risk management, and all of that in the safety architecture of AI. Go ahead.

Kip Wainscott

Thank you. A lot to unpack there. This is a great panel, by the way. So many esteemed panelists. I really feel privileged to share the microphone with all of you. You know, it’s interesting thinking about these things from the vantage of JPMorgan Chase because we’re really kind of interrogating the questions from sort of three different perspectives, right? One is one of the world’s largest financial houses that’s deeply invested in artificial intelligence. We have an acute interest in unlocking the value of this technology and seeing the growth potential of this technology. But there’s a simple truth that we recognize, and that is that AI is only valuable if it is deployed, and deployment depends on trust.

And so we have an interest in this multi-stakeholder dialogue about the trust model that is going to unlock diffusion, not just across enterprises, but in the public sphere, across the Global South, really putting this technology into organizations that affect people’s real lives. The second perspective from which we’re looking at all of this is as a deployer of the technology ourselves. We are one of the world’s largest deployers of AI. And what’s interesting is that we’re also in one of the most regulated industries, financial services, and yet financial services have been among the earliest adopters of AI. We’ve been using artificial intelligence through this evolutionary ramp for more than a decade.

To combat fraud, to protect consumers, to create more efficient personalization of financial services. And I think one reason you see financial services companies so ready and eager to adopt is that we have that existing trust architecture. Trust isn’t just a feature of financial services; it’s the core business model. You mentioned model risk management: we have rigorous practices of evaluating models, of documenting governance and oversight, of ensuring ongoing monitoring across all of our technology deployments, in a way that lends itself to what I would call a comfort in building the trust ecosystem for responsible deployment. And then the third prism we look at this issue through is as a purchaser of these technologies, almost a procurement lens.

We spend $20 billion annually on our technology budget. That puts us in a really sizable position in the innovation ecosystem of startups and scale-ups that want to sell their products. They’re building innovative new artificial intelligence applications with the hope that they’re going to be able to sell them to the world, and to JPMorgan Chase. And we see a real innovation inefficiency right now in the fact that there isn’t a shared set of expectations for trust, for what these products should be benchmarked against in order for us to ingest them with the confidence that they’re going to serve our customers well and reflect our responsibilities, our duty of care as a regulated industry.

And so it speaks, I think, to the need to bring these diverse perspectives into the conversation around governance, so that we can get past the compartmentalization of AI safety as a siloed conversation and accelerated AI adoption as a different conversation. This is the same conversation. The purpose of both is really to align on a trust model that ensures we can deploy these technologies in a very broad and impactful way across the economy.

Moderator

I’m going to put you on the spot a little bit while you have your microphone. You’ve been here throughout the week and you’ve heard the panelists just now. From what you’ve been hearing throughout the week, how optimistic are you? I think His Excellency Sokeng mentioned collaboration. How confident are you in building these collaborations to build trust in AI, just from the conversations that you’ve had this week?

Kip Wainscott

Yeah, no, I’m optimistic. I think just the fact that we’re here in New Delhi, having this summit in this environment, this is a much bigger summit, and I think it’s a more inclusive cross-section of voices. So that reflects that this conversation is getting bigger, that we’ve moved past the focus on technical capabilities, which is kind of where we have been. Now I think capability has really been almost commoditized, and legitimacy has not. And we’re in this phase now where we really need to establish the legitimacy of these technologies: that they are fit for purpose, that they can be trusted and deployed across these different societal sectors. And so, yes, I am optimistic.

It requires intention, and I’m seeing the intentionality, I think, around the curation of these conversations. I think there’s a lot to carry forward here. Also, as some of you may have seen, we’re very near the end of this summit, and we were barely more than halfway through the week (excuse me, I’m running on fumes at this point) when people were already writing up the assessment of what the themes were, what the takeaways were. And people were saying, oh, this summit is no longer about responsibility or safety. That just isn’t my perception of these conversations. It really is that we’re talking about them in a different way.

We’re talking about how they are going to impact real lives, how we can take this technology into the economy in really valuable ways. And in order to do that, we have to include that trust dimension.

Moderator

Okay. Thank you so much. We have eight minutes left, and I’ve been told that we have to finish on time, but I really want to get this question in and hopefully hear from everyone. So Ambassador Garcia, Your Excellency Sokeng, and Ibu Ayu: what are you taking away? What are you taking home from this panel, first of all? Or from the week that you’ve been here, reflecting on what Kip just mentioned?

Eugenio Vargas Garcia

Yes, thank you. First, I think India was very successful in bringing this summit to the Global South for the very first time. This follows the Bletchley process that began in the UK in 2023, and then we had Seoul and Paris. So what we have been discussing here is something more inclusive, and some new concerns were added to the agenda, sustainability among them; not that it was not discussed before, but now with this perspective coming from the Global South, which is important. I would conclude with three recommendations, because we need to be practical. We mostly agree on high-level principles of AI governance. But countries lacking resources, or facing other competing priorities, need to decide what to do and, in many cases, prioritize.

So first, I think they should start small, with a few small-scale, quick-impact projects, so that they can build on proven success. Let’s say focus on education, healthcare, agriculture; pick some specific projects and then build to reach the next level. Second, they need to seek international partners. It is sometimes useful and necessary to enhance national capabilities this way; it’s difficult for a single country alone to invest in infrastructure and do something that expensive, so seek international cooperation. And third, as I said before, engage in these discussions at the international level, engage in tech diplomacy, and send more people to discuss where it matters, including at the United Nations. Thank you so much.

H.E. Sokeng

Thank you. Seeing the time, I’ll go very quickly to the last point. Coming to this summit, I agree with our panelists that it’s very inclusive, and we can see perspectives from all the stakeholders: government, industry, academia, and even startups. Learning from this, I have just one wish, which is that we have to be honest with each other, industry and government, and bear in mind that we are here to protect people, for the people. So whatever we do, we need to think about people first. With that, please consider that when we think of governance frameworks, the regulations and laws that governments put in place should be mechanisms to promote innovation.

They should not be an obstacle to innovation. And in order to do that, we need to build trust, and we need to be honest with each other. Thank you.

Moderator

Ibu Ayu, quickly.

Aju Widya Sari

Thank you. Actually, I’m very impressed with the spirit of Prime Minister Modi yesterday. I think every country has the same spirit regarding AI. So there are three points that I’m taking from this summit. One is collaboration, indeed. Then inclusiveness, because if we care about inclusiveness, we need intention from government, from industry, from the people. And the last is investment, because you know that AI needs more and more investment, and this is where collaboration comes in. But the issue that will come is how we define the sovereign, because sovereignty is based on the needs of each country. How we define it, and whether it is equal or not, is still an open question for me as well.

Moderator

Thank you so much, Ibu Ayu. Dr. Parag, I’m going to have you bring it home in one minute: what you’re taking home, in particular from your conversations with some of the different governments here.

Parag Khanna

Well, the first thing is, I actually want to echo Kip’s point: we’re at an inflection point. In phase one, let’s say, there was a lot of harping about trust. Can we trust it? Can we not? And I think it’s a good thing that that pressure was there, but that pressure for transparency in models has now delivered to some degree. And it’s been done in a way where public and private have not been on opposite sides of the discussion, but have really partnered. So I think we’re beyond that, and now we can move from models and theory into action and application. And that’s the part of the stack that we want to be on.

The infrastructure build-out is there; it’s being provided. The apps are being developed and deployed. I have seen a little bit of this, but I would want to see a lot more in subsequent editions, especially as this summit migrates around the world and remains, perhaps, in the hands of developing countries, on the application side as much as possible. And I’d want us to think not just about very specific verticals, as we have here and elsewhere, your healthcare, education, I’ve emphasized climate and others, but about something more societal, around resilience. Resilience is a term that comes up a lot but doesn’t really get quantified enough. If we can push for that, it’s going to help us establish performance benchmarks, not just for models but for applications.

And that’s really what I think everyone wants to see, to make sure that AI doesn’t become not just a financial bubble but almost a policy bubble as well.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Satvinder Singh
6 arguments · 175 words per minute · 1913 words · 652 seconds
Argument 1
AI will create collaboration and enhancement rather than displacement of human workers
EXPLANATION
Satvinder Singh argues that AI’s current impact is primarily on collaborative augmentation rather than full automation and complete replacement of human workers. He emphasizes that governments will implement policies to prevent total displacement of important jobs by AI.
EVIDENCE
Studies by Anthropic on AI jobs and security show that AI is currently impacting certain segments of the economy, with the biggest impact on white-collar jobs rather than blue-collar jobs, and mostly involving collaborative augmentation rather than full automation.
MAJOR DISCUSSION POINT
AI-Driven Workforce Transformation and Job Impact
AGREED WITH
Dr. Mahendra Karpan, Vinod Jhawar, Son Sokeng
Argument 2
AI impact is currently greater on white-collar jobs than blue-collar jobs, with collaborative augmentation being the primary model
EXPLANATION
Singh explains that current AI deployment shows greater impact on white-collar positions through collaborative augmentation rather than complete automation. This pattern suggests that blue-collar workers may face less immediate displacement risk.
EVIDENCE
Anthropic study data showing AI’s impact is concentrated on white-collar jobs with collaborative augmentation being the dominant model rather than full replacement.
MAJOR DISCUSSION POINT
AI-Driven Workforce Transformation and Job Impact
AGREED WITH
Tejpreet S Chopra, Son Sokeng, Vinod Jhawar
Argument 3
Governments will implement policies to prevent complete replacement of important jobs by AI
EXPLANATION
Singh asserts that governments are actively discussing and will establish policies to ensure AI doesn’t completely replace critical high-level positions in society. He emphasizes this is becoming a central concern for policymakers.
EVIDENCE
Ongoing closed-door conversations among governments and policy makers about establishing barometers and rules to determine what AI can or cannot do in terms of job replacement.
MAJOR DISCUSSION POINT
AI-Driven Workforce Transformation and Job Impact
DISAGREED WITH
Narendra Singh
Argument 4
Digital Economy Framework Agreement (DEFA) will create the world’s largest regional digital agreement connecting 700 million people across 11 ASEAN countries
EXPLANATION
Singh describes DEFA as a legally binding digital agreement currently being negotiated to digitally interconnect and make interoperable 11 ASEAN countries. This initiative aims to enable better business operations and economic growth across the region.
EVIDENCE
DEFA negotiations have been ongoing for two years and are expected to complete by March. It will be the largest regional digital agreement in the world and is legally binding.
MAJOR DISCUSSION POINT
Digital Public Infrastructure and Accessibility
Argument 5
Least developed countries will benefit most from digital connectivity initiatives
EXPLANATION
Singh argues that while advanced economies like Singapore are already digitally connected, the greatest beneficiaries of DEFA will be least developed countries like Laos, Cambodia, and Myanmar. These countries can leapfrog to the latest connectivity technologies at lower costs.
EVIDENCE
Data showing that the impact of DEFA will be greatest for LDCs in terms of jobs, prospects, and economic growth because they can move directly to the latest connectivity technologies at the lowest cost.
MAJOR DISCUSSION POINT
Digital Public Infrastructure and Accessibility
Argument 6
ASEAN region shows that businesses are deploying AI profitably in day-to-day operations
EXPLANATION
Singh highlights that studies show Southeast Asia and ASEAN, along with India, are regions where people are seeing the most profitable translation of AI use in business operations. This demonstrates practical AI adoption success in the Global South.
EVIDENCE
Studies showing that businesses in Southeast Asia and ASEAN are deploying AI into day-to-day business operations and making significant impact on productivity, growth, and relevance.
MAJOR DISCUSSION POINT
Regional Cooperation and Technology Diplomacy
Dr. Mahendra Karpan
2 arguments · 134 words per minute · 1394 words · 622 seconds
Argument 1
Healthcare will maintain human touch for patient comfort and reassurance despite AI diagnostic capabilities
EXPLANATION
Dr. Karpan argues that while AI can provide accurate diagnosis and treatment recommendations, the human element of comfort, warmth, and reassurance cannot be replaced by AI, especially in emergency situations and terminal care.
EVIDENCE
Examples of emergency room scenarios with scared parents of asthmatic children, and end-stage cancer patients needing human warmth and hand-holding during difficult stages of life.
MAJOR DISCUSSION POINT
AI-Driven Workforce Transformation and Job Impact
AGREED WITH
Satvinder Singh, Vinod Jhawar, Son Sokeng
Argument 2
Telemedicine and remote healthcare delivery can serve dispersed populations effectively
EXPLANATION
Dr. Karpan describes how Guyana has implemented telemedicine across over 200 sites to serve remote indigenous populations. This system allows community health workers to provide real-time diagnosis and advice from specialists using video conferencing and Starlink connectivity.
EVIDENCE
Guyana’s implementation of over 200 functional telemedicine sites with Starlink connectivity, serving remote villages where patients can receive EKGs, x-rays, blood pressure monitoring, and specialist consultations. Example of an 18-year-old boy from an interior village who had never seen car headlights before being airlifted for snake bite treatment.
MAJOR DISCUSSION POINT
Digital Public Infrastructure and Accessibility
Tejpreet S Chopra
3 arguments · 197 words per minute · 2261 words · 687 seconds
Argument 1
The world will need four times more energy in the next 10-12 years to support AI growth, requiring massive investment
EXPLANATION
Chopra reports from a major CEO gathering in Abu Dhabi that the world will require four times more energy over the next 10-12 years to support data center growth and AI infrastructure. This will necessitate four trillion dollars annually for the next decade.
EVIDENCE
Information from a majlis in Abu Dhabi with 100 CEOs including major oil and gas companies, energy utilities, AI companies, and capital providers, all discussing the massive energy requirements for AI infrastructure.
MAJOR DISCUSSION POINT
Infrastructure Challenges and Energy Requirements
Argument 2
Continuous learning and upskilling will be the key for all of us
EXPLANATION
Chopra emphasizes that the rapid pace of technological change means everyone, regardless of age, will need to engage in continuous learning and upskilling to remain relevant in an AI-driven world.
EVIDENCE
The speed at which technology is changing requires adaptation from all participants in the workforce and society.
MAJOR DISCUSSION POINT
AI-Driven Workforce Transformation and Job Impact
AGREED WITH
Satvinder Singh, Son Sokeng, Vinod Jhawar
Argument 3
Countries with cheapest energy will win the AI arms race
EXPLANATION
Chopra argues that energy costs will be the determining factor in AI competitiveness globally. He believes India has a significant opportunity due to dramatically reduced renewable energy costs.
EVIDENCE
Personal experience in renewable energy sector: solar farm revenue dropped from 18 rupees per kilowatt hour to 2.20 rupees, and wind farm costs fell from 8.50 rupees to 2 rupees over eight years.
MAJOR DISCUSSION POINT
Infrastructure Challenges and Energy Requirements
DISAGREED WITH
Vinod Jhawar
Nihar Shah
2 arguments · 198 words per minute · 1151 words · 348 seconds
Argument 1
Energy, cooling, and water consumption are critical blind spots in AI infrastructure planning
EXPLANATION
Shah argues that while energy requirements get attention, cooling and water consumption for AI infrastructure are overlooked blind spots that need urgent attention. He emphasizes the need for holistic thinking about infrastructure requirements.
EVIDENCE
Shah directs the global cooling program at Berkeley Lab and notes that data center cooling and water consumption are not adequately addressed in AI infrastructure planning discussions.
MAJOR DISCUSSION POINT
Infrastructure Challenges and Energy Requirements
AGREED WITH
Vinod Jhawar, Narendra Singh, Aju Widya Sari
Argument 2
Data center growth has tripled over the last decade and is forecast to triple again by 2028
EXPLANATION
Shah presents data showing the exponential growth of data centers, with growth tripling in the past decade and projections for another tripling by 2028. This demonstrates the scale of infrastructure challenges ahead.
EVIDENCE
Berkeley Lab’s reporting to U.S. Congress on data center growth as required by the Energy Act of 2020, showing historical tripling over the last decade.
MAJOR DISCUSSION POINT
Infrastructure Challenges and Energy Requirements
Vinod Jhawar
2 arguments · 160 words per minute · 941 words · 350 seconds
Argument 1
Renewable energy integration is essential for sustainable AI infrastructure development
EXPLANATION
Jhawar explains that Nxtra is targeting net zero by 2030-2032 and has contracted close to 400 megawatts of renewable energy. The company is moving from 50% renewable energy to almost 100% renewable energy sourcing for data centers.
EVIDENCE
Nxtra has contracted close to 400 megawatts of renewable energy and is connected to India’s central grid with access to abundant renewable energy generators supported by government policies.
MAJOR DISCUSSION POINT
Infrastructure Challenges and Energy Requirements
AGREED WITH
Nihar Shah, Narendra Singh, Aju Widya Sari
DISAGREED WITH
Tejpreet S Chopra
Argument 2
AI tools will democratize opportunities by breaking language barriers and enabling grassroots participation
EXPLANATION
Jhawar argues that AI tools will provide opportunities to grassroots populations by eliminating language barriers, making English no longer a requirement, and enabling people to upskill without formal qualifications through self-learning modules.
EVIDENCE
AI tools are breaking language barriers and enabling blue-collar workers to upskill themselves without needing degrees, becoming specialists, entrepreneurs, or coaches rather than being confined to desk jobs.
MAJOR DISCUSSION POINT
AI-Driven Workforce Transformation and Job Impact
AGREED WITH
Tejpreet S Chopra, Satvinder Singh, Son Sokeng
Narendra Singh
4 arguments · 183 words per minute · 889 words · 291 seconds
Argument 1
India has cost advantages with data center construction at $4-6 million per megawatt versus $12 million in other markets
EXPLANATION
Singh highlights India’s significant cost advantage in data center construction, with costs of $4-6 million per megawatt compared to $12 million in markets like the US, Singapore, and Dubai. This is because 80-90% of required supply chain products are manufactured in India.
EVIDENCE
Cost comparison showing Indian data center construction at $4-6 million per megawatt versus $12 million in US, Singapore, and Dubai markets, with most supply chain components manufactured domestically in India.
MAJOR DISCUSSION POINT
Infrastructure Challenges and Energy Requirements
AGREED WITH
Nihar Shah, Vinod Jhawar, Aju Widya Sari
Argument 2
Government subsidies and support are making AI compute accessible at low costs in India
EXPLANATION
Singh explains that the Indian government has allocated significant resources for AI infrastructure, providing GPU access at subsidized rates and supporting the development of foundation models and the broader AI ecosystem.
EVIDENCE
Government announcement of 200 billion for data center infrastructure, 12 foundation models launched in the country, GPU allocation available to innovators at subsidized rates with government paying full price and subsidizing end users.
MAJOR DISCUSSION POINT
Digital Public Infrastructure and Accessibility
DISAGREED WITH
Satvinder Singh
Argument 3
Entrepreneurial culture must be prioritized in education to help people adapt to AI-driven changes
EXPLANATION
Singh argues that educational systems should focus on developing entrepreneurial skills and problem-solving capabilities, enabling students to identify and solve real-world problems using AI tools rather than traditional coding approaches.
EVIDENCE
India has a billion users and billion problems that can be solved through entrepreneurial approaches, with students encouraged to solve problems at home, college, or school levels using AI tools.
MAJOR DISCUSSION POINT
AI-Driven Workforce Transformation and Job Impact
Argument 4
Space-based data centers represent future opportunities for critical workload protection
EXPLANATION
Singh describes plans to deploy data centers in space, partnering with space technology companies to serve critical workloads including border protection and unmanned vehicles, taking advantage of India’s evolving space ecosystem.
EVIDENCE
Partnership with Agnikul, a space tech company, for a first mission before the end of the year, leveraging India’s space ecosystem development over the last seven years with government support for private players.
MAJOR DISCUSSION POINT
Regional Cooperation and Technology Diplomacy
Mohamed Kinaanath
2 arguments · 91 words per minute · 1484 words · 975 seconds
Argument 1
Small island developing states need AI for institutional resilience and survival, not just competitive advantage
EXPLANATION
Minister Kinaanath argues that for small island developing states like the Maldives, AI is not merely about competitive advantage but is essential for institutional resilience, sovereign capacity, and survival due to geographical challenges and climate vulnerability.
EVIDENCE
Maldives comprises 1,200 remote islands spread across 850 square kilometers with narrow economic base and acute exposure to climate change and sea-level rise.
MAJOR DISCUSSION POINT
Trusted AI Governance and Global South Leadership
AGREED WITH
Eugenio Vargas Garcia, Moderator, Son Sokeng
Argument 2
AI offers pathways for economic diversification and knowledge economy development
EXPLANATION
Kinaanath explains that AI enables the Maldives to develop a knowledge economy, cultivate local technology enterprises, and position youth for digital opportunities, moving beyond tourism-dependent economics.
EVIDENCE
Maldives 2.0 Digital Transformation Agenda aims to transform the country into a digital-first nation within three years, with high internet penetration, mobile subscribers, 4G and 5G coverage, and fiber connectivity.
MAJOR DISCUSSION POINT
Economic Growth and Development Applications
Dipali Khanna
2 arguments · 136 words per minute · 674 words · 295 seconds
Argument 1
Trust must be designed from day one, not retrofitted after deployment
EXPLANATION
Khanna argues that trust in AI systems must be built into the design from the beginning rather than added as an afterthought. She emphasizes that transparency, auditability, and grievance redress are adoption accelerators, not compliance burdens.
EVIDENCE
Examples of successful implementations like Togo’s Novissi reaching nearly a million informal workers in days and India’s CoWIN delivering at population scale with transparency and interoperability built in.
MAJOR DISCUSSION POINT
Trusted AI Governance and Global South Leadership
AGREED WITH
Kip Wainscott, Son Sokeng, Moderator
Argument 2
Partnership, patient capital, and institutional strength are key to successful AI implementation
EXPLANATION
Khanna identifies three critical elements for successful AI deployment: alignment between governments, private sector, and technologists; risk-tolerant capital that understands the need for experimentation; and institutional foundations that citizens trust.
EVIDENCE
Success stories like Togo’s Novissi and India’s CoWIN demonstrate ecosystems working together, with governments moving decisively, private sector collaboration, and catalytic capital absorbing early risk.
MAJOR DISCUSSION POINT
Economic Growth and Development Applications
Son Sokeng
2 arguments · 128 words per minute · 501 words · 234 seconds
Argument 1
AI readiness assessments and national strategies are being developed across Global South countries
EXPLANATION
Sokeng describes Cambodia’s completion of an AI readiness assessment with UNESCO support and the subsequent development of a national AI strategy and governance framework. This represents a systematic approach to AI adoption in developing countries.
EVIDENCE
Cambodia completed AI readiness assessment in July 2025 with UNESCO support, leading to drafting of national AI strategy and governance framework with six strategic priorities including people, infrastructure, adoption, and governance.
MAJOR DISCUSSION POINT
Trusted AI Governance and Global South Leadership
AGREED WITH
Eugenio Vargas Garcia, Moderator, Mohamed Kinaanath
Argument 2
Human-centric governance frameworks should be aligned with AI risk assessment
EXPLANATION
Sokeng explains that Cambodia’s AI governance framework is human-centric and based on risk assessment, requiring people to understand AI impacts before implementing governance measures. The focus is on educating users, leaders, and regulators.
EVIDENCE
Cambodia’s goal to have 100,000 AI-ready talents in 10 years and training of over 10,000 government officials in basic digital and AI skills through the Cambodia Digital Skills Roadmap.
MAJOR DISCUSSION POINT
Trusted AI Governance and Global South Leadership
AGREED WITH
Dipali Khanna, Kip Wainscott, Moderator
DISAGREED WITH
Parag Khanna
Eugenio Vargas Garcia
3 arguments · 121 words per minute · 414 words · 204 seconds
Argument 1
Global South countries must be co-authors of AI governance norms, not passive recipients
EXPLANATION
Ambassador Garcia emphasizes that Global South countries need to engage in tech diplomacy and participate actively in international AI governance discussions to ensure their voices are heard and their perspectives included in global frameworks.
EVIDENCE
Brazil’s participation in G20, BRICS, and hosting of COP30 where digital technologies and climate change were included as sustainability issues, demonstrating active engagement in global tech governance.
MAJOR DISCUSSION POINT
Trusted AI Governance and Global South Leadership
AGREED WITH
Moderator, Mohamed Kinaanath, Son Sokeng
Argument 2
Countries should start with small-scale, quick-impact projects before scaling up
EXPLANATION
Garcia recommends that countries with limited resources should begin with small-scale projects in areas like education, healthcare, and agriculture to build proven success before expanding to larger initiatives.
EVIDENCE
Recognition that countries face competing priorities and resource constraints, requiring strategic prioritization of AI initiatives.
MAJOR DISCUSSION POINT
Economic Growth and Development Applications
Argument 3
International cooperation is essential for countries with limited resources
EXPLANATION
Garcia argues that countries should seek international partners to enhance national capabilities, as it’s difficult for single countries to invest in expensive AI infrastructure alone.
EVIDENCE
Acknowledgment of the high costs of AI infrastructure investment and the need for countries to enhance capabilities through international cooperation.
MAJOR DISCUSSION POINT
Economic Growth and Development Applications
Aju Widya Sari
1 argument · 113 words per minute · 491 words · 258 seconds
Argument 1
Investment and infrastructure development require collaborative approaches
EXPLANATION
Sari emphasizes that AI development requires significant investment and collaboration between government, industry, and people. She highlights Indonesia’s challenges with infrastructure penetration and the need for strategic investment in backbone networks and GPU-based data centers.
EVIDENCE
Indonesia has 95% mobile broadband penetration but mostly 4G, 70% backbone penetration by sub-district area across hundreds of districts and 10,000 sub-districts, and limited GPU-based data center capacity.
MAJOR DISCUSSION POINT
Regional Cooperation and Technology Diplomacy
AGREED WITH
Nihar Shah, Vinod Jhawar, Narendra Singh
Parag Khanna
3 arguments · 164 words per minute · 1473 words · 535 seconds
Argument 1
India’s Digital Public Infrastructure (DPI) model offers affordable, sovereign solutions for developing countries
EXPLANATION
Khanna argues that India’s DPI approach provides a model for developing countries to access AI technologies through cloud-based solutions that protect data sovereignty while being affordable, avoiding the need for massive infrastructure investment.
EVIDENCE
India’s pioneering work with Aadhaar and payment systems, with more than 50 countries building payment and identity systems on India’s DPI stack.
MAJOR DISCUSSION POINT
Digital Public Infrastructure and Accessibility
Argument 2
Second-mover advantage allows developing countries to benefit from AI without massive infrastructure investment
EXPLANATION
Khanna argues that developing countries can leverage leapfrogging strategies similar to mobile telephony and renewable energy, using cloud computing and edge computing solutions to access AI benefits without the massive capital expenditure required for infrastructure.
EVIDENCE
Historical examples of leapfrogging in mobile telephony, fintech, and renewable energy where developing countries gained second-mover advantage through faster, better, cheaper solutions.
MAJOR DISCUSSION POINT
Economic Growth and Development Applications
DISAGREED WITH
Son Sokeng
Argument 3
Geospatial AI tools can address critical challenges in sustainable urbanization and climate adaptation
EXPLANATION
Khanna emphasizes that AI-powered geospatial tools can help governments address affordable housing and climate adaptation challenges by providing foresight for urban planning and infrastructure investment targeting.
EVIDENCE
Global availability of AI-powered climate models that can be downscaled for individual countries, and geospatial tools that can predict urban expansion patterns and guide infrastructure development for decades ahead.
MAJOR DISCUSSION POINT
Economic Growth and Development Applications
Kip Wainscott
1 argument, 154 words per minute, 964 words, 374 seconds
Argument 1
Financial services demonstrate that existing trust architectures enable successful AI adoption
EXPLANATION
Wainscott argues that financial services have been early AI adopters because they already have robust trust architectures including model risk management, governance oversight, and ongoing monitoring that naturally extend to AI deployment.
EVIDENCE
JPMorgan Chase’s experience as one of the world’s largest AI deployers, using AI for fraud combat, consumer protection, and financial services personalization, supported by rigorous model evaluation and governance practices.
MAJOR DISCUSSION POINT
Trusted AI Governance and Global South Leadership
AGREED WITH
Dipali Khanna, Son Sokeng, Moderator
Moderator
3 arguments, 165 words per minute, 860 words, 312 seconds
Argument 1
Trust is now the condition for scale in AI deployment, not a downstream concern
EXPLANATION
The moderator argues that trust has become the fundamental prerequisite for scaling AI adoption across governments, enterprises, and societies. The focus has shifted from whether AI will be adopted to whether it will be trusted by citizens, institutions, and across borders.
EVIDENCE
The session is positioned within the summit’s trusted AI pillar, and the moderator notes that much of the global AI governance debate is still shaped by frameworks from the Global North.
MAJOR DISCUSSION POINT
Trusted AI Governance and Global South Leadership
AGREED WITH
Dipali Khanna, Kip Wainscott, Son Sokeng
Argument 2
Global South must be co-authors of AI governance norms, not passive recipients
EXPLANATION
The moderator emphasizes that the Global South should actively participate in creating AI governance frameworks rather than simply adopting templates from the Global North. This requires developing governance approaches that are interoperable, pragmatic, and grounded in local institutional strengths.
EVIDENCE
AI Safety Asia’s mandate to bridge the Global North and South through collaboration, capacity building, and policy-relevant research.
MAJOR DISCUSSION POINT
Trusted AI Governance and Global South Leadership
AGREED WITH
Eugenio Vargas Garcia, Mohamed Kinaanath, Son Sokeng
Argument 3
AI governance in the Global South must address population-scale deployment under real resource constraints
EXPLANATION
The moderator argues that AI governance frameworks for the Global South must be designed for contexts where AI is deployed at massive scale with limited resources, where the cost of failure has direct social, economic, and political consequences.
EVIDENCE
Recognition that Global South contexts involve real resource constraints and that failure costs are not abstract but have direct social, economic, and political impacts.
MAJOR DISCUSSION POINT
Trusted AI Governance and Global South Leadership
Audience
4 arguments, 154 words per minute, 294 words, 114 seconds
Argument 1
Hydrogen fuel cells face cost barriers preventing large-scale implementation
EXPLANATION
An audience member from the HDI industry, a former CSIRC research fellow, notes that while hydrogen fuel cells have been tested experimentally in railways and buses, they have not been implemented at large scale in India or abroad because of cost.
EVIDENCE
Experimental use in railways and buses but lack of large-scale implementation globally.
MAJOR DISCUSSION POINT
Infrastructure Challenges and Energy Requirements
Argument 2
AI pilot companies face technical barriers and scaling challenges with current cost structures
EXPLANATION
An audience member from MindEquity.ai and AI Society asks about the biggest technical barriers for AI pilot companies to achieve full-time impact, highlighting concerns about scaling challenges.
EVIDENCE
Question posed by the CTO of an AI pilot company about technical barriers to scaling.
MAJOR DISCUSSION POINT
Economic Growth and Development Applications
Argument 3
Government AI adoption policies should consider job displacement impacts and cost-effectiveness
EXPLANATION
An audience member raises concerns about government adoption of AI in areas like call centers, where AI costs 7 rupees per call versus 1 rupee for human call center workers, potentially leading to massive job losses without clear policy frameworks.
EVIDENCE
Cost comparison showing AI at 7 rupees per call versus human call center workers at 1 rupee per call.
MAJOR DISCUSSION POINT
AI-Driven Workforce Transformation and Job Impact
Argument 4
AI subsidies could catalyze adoption similar to India’s solar revolution
EXPLANATION
An audience member suggests that government subsidies for AI projects could serve as catalysts for widespread adoption, drawing parallels to how subsidies and information campaigns helped drive India’s solar energy revolution.
EVIDENCE
Historical example of solar energy costs dropping from 18 rupees to 2.20 rupees per kilowatt hour through government support and subsidies.
MAJOR DISCUSSION POINT
Economic Growth and Development Applications
H.E. Sokeng
2 arguments, 144 words per minute, 158 words, 65 seconds
Argument 1
Honest collaboration between industry and government is essential for people-first AI governance
EXPLANATION
His Excellency Sokeng emphasizes that successful AI governance requires honest collaboration between industry, government, academia, and startups, with all stakeholders keeping people’s protection as the primary focus. He stresses that government regulations should promote innovation rather than create obstacles.
EVIDENCE
Observation of inclusive perspectives from all stakeholders at the summit, including government, industry, academia, and startups.
MAJOR DISCUSSION POINT
Trusted AI Governance and Global South Leadership
AGREED WITH
Satvinder Singh, Dr. Mahendra Karpan, Vinod Jhawar
Argument 2
Government regulations should be mechanisms to promote innovation, not obstacles to it
EXPLANATION
Sokeng argues that when governments develop AI governance frameworks and regulations, these should be designed as tools to facilitate and promote innovation rather than creating barriers. This requires building trust and maintaining honesty between all stakeholders.
EVIDENCE
Recognition that governance frameworks and regulations need to balance protection with innovation promotion.
MAJOR DISCUSSION POINT
Trusted AI Governance and Global South Leadership
Agreements
Agreement Points
AI should enhance and collaborate with humans rather than replace them
Speakers: Satvinder Singh, Dr. Mahendra Karpan, Vinod Jhawar, H.E. Sokeng
AI will create collaboration and enhancement rather than displacement of human workers
Healthcare will maintain human touch for patient comfort and reassurance despite AI diagnostic capabilities
AI tools will democratize opportunities by breaking language barriers and enabling grassroots participation
Honest collaboration between industry and government is essential for people-first AI governance
Multiple speakers agreed that AI should augment human capabilities rather than replace humans entirely, emphasizing the irreplaceable value of human touch, collaboration, and people-first approaches
Continuous learning and upskilling are essential for AI adaptation
Speakers: Tejpreet S Chopra, Satvinder Singh, Son Sokeng, Vinod Jhawar
Continuous learning and upskilling will be the key for all of us
AI impact is currently greater on white-collar jobs than blue-collar jobs, with collaborative augmentation being the primary model
Human-centric governance frameworks should be aligned with AI risk assessment
AI tools will democratize opportunities by breaking language barriers and enabling grassroots participation
Speakers consistently emphasized that the rapid pace of technological change requires continuous learning and capacity building for all stakeholders to adapt to AI-driven transformations
Infrastructure development is critical for AI deployment
Speakers: Nihar Shah, Vinod Jhawar, Narendra Singh, Aju Widya Sari
Energy, cooling, and water consumption are critical blind spots in AI infrastructure planning
Renewable energy integration is essential for sustainable AI infrastructure development
India has cost advantages with data center construction at $4-6 million per megawatt versus $12 million in other markets
Investment and infrastructure development require collaborative approaches
All speakers acknowledged that robust infrastructure including energy, cooling, connectivity, and data centers is fundamental to successful AI implementation, with emphasis on sustainability and cost-effectiveness
Trust must be built into AI systems from the beginning
Speakers: Dipali Khanna, Kip Wainscott, Son Sokeng, Moderator
Trust must be designed from day one, not retrofitted after deployment
Financial services demonstrate that existing trust architectures enable successful AI adoption
Human-centric governance frameworks should be aligned with AI risk assessment
Trust is now the condition for scale in AI deployment, not a downstream concern
Speakers agreed that trust is fundamental to AI adoption and must be embedded in system design from the outset rather than added as an afterthought
Global South countries should be active participants in AI governance
Speakers: Eugenio Vargas Garcia, Moderator, Mohamed Kinaanath, Son Sokeng
Global South countries must be co-authors of AI governance norms, not passive recipients
Global South must be co-authors of AI governance norms, not passive recipients
Small island developing states need AI for institutional resilience and survival, not just competitive advantage
AI readiness assessments and national strategies are being developed across Global South countries
There was strong consensus that Global South countries should actively shape AI governance frameworks rather than simply adopt solutions designed elsewhere, bringing their unique perspectives and needs to global discussions
Similar Viewpoints
India has significant competitive advantages in AI infrastructure due to low energy costs and manufacturing capabilities, positioning it well for AI leadership
Speakers: Tejpreet S Chopra, Narendra Singh, Vinod Jhawar
Countries with cheapest energy will win the AI arms race
India has cost advantages with data center construction at $4-6 million per megawatt versus $12 million in other markets
Renewable energy integration is essential for sustainable AI infrastructure development
Small and geographically dispersed nations can leverage AI and digital technologies to overcome physical limitations and serve remote populations effectively
Speakers: Dr. Mahendra Karpan, Mohamed Kinaanath
Telemedicine and remote healthcare delivery can serve dispersed populations effectively
Small island developing states need AI for institutional resilience and survival, not just competitive advantage
India’s approach to digital infrastructure and government support creates accessible, affordable models that other developing countries can adopt
Speakers: Parag Khanna, Narendra Singh
India’s Digital Public Infrastructure (DPI) model offers affordable, sovereign solutions for developing countries
Government subsidies and support are making AI compute accessible at low costs in India
Successful AI deployment requires strong institutional foundations, collaborative partnerships, and patient capital that understands the complexity of building trusted systems
Speakers: Dipali Khanna, Kip Wainscott
Partnership, patient capital, and institutional strength are key to successful AI implementation
Financial services demonstrate that existing trust architectures enable successful AI adoption
Unexpected Consensus
Energy and infrastructure constraints as primary bottlenecks
Speakers: Tejpreet S Chopra, Nihar Shah, Vinod Jhawar, Narendra Singh
The world will need four times more energy in the next 10-12 years to support AI growth, requiring massive investment
Energy, cooling, and water consumption are critical blind spots in AI infrastructure planning
Renewable energy integration is essential for sustainable AI infrastructure development
India has cost advantages with data center construction at $4-6 million per megawatt versus $12 million in other markets
Despite coming from different sectors (industry, research, infrastructure), speakers unexpectedly converged on infrastructure and energy as the primary constraints for AI scaling, rather than focusing primarily on algorithmic or governance challenges
Second-mover advantage for developing countries
Speakers: Parag Khanna, Satvinder Singh, Narendra Singh
Second-mover advantage allows developing countries to benefit from AI without massive infrastructure investment
Least developed countries will benefit most from digital connectivity initiatives
Government subsidies and support are making AI compute accessible at low costs in India
Unexpectedly, speakers from different backgrounds agreed that developing countries could actually have advantages in AI adoption through leapfrogging strategies and lower costs, challenging the narrative of inevitable technological disadvantage
Space-based solutions as practical near-term options
Speakers: Narendra Singh, Mohamed Kinaanath
Space-based data centers represent future opportunities for critical workload protection
Small island developing states need AI for institutional resilience and survival, not just competitive advantage
The convergence on space-based solutions as practical options for both large countries like India and small island states was unexpected, suggesting space technology is becoming more accessible for diverse use cases
Overall Assessment

The discussion revealed strong consensus on human-centric AI development, the critical importance of infrastructure and energy considerations, the need for trust-by-design approaches, and the imperative for Global South leadership in AI governance. Speakers consistently emphasized collaboration over displacement, continuous learning, and the unique opportunities for developing countries to leverage AI for leapfrogging development challenges.

High level of consensus across diverse stakeholders from government, private sector, and international organizations. The agreement spans technical, governance, and development dimensions, suggesting a mature understanding of AI’s multifaceted challenges and opportunities. This consensus provides a strong foundation for coordinated action on AI governance and deployment in the Global South, with particular emphasis on inclusive, sustainable, and human-centered approaches.

Differences
Different Viewpoints
Approach to AI job displacement – government intervention vs. market-driven solutions
Speakers: Satvinder Singh, Narendra Singh
Governments will implement policies to prevent complete replacement of important jobs by AI
Government subsidies and support are making AI compute accessible at low costs in India
Singh advocates for government policies to prevent AI job displacement, while Narendra Singh focuses on government support for AI adoption and entrepreneurship without addressing displacement concerns
Infrastructure investment priorities – centralized vs. distributed approaches
Speakers: Vinod Jhawar, Tejpreet S Chopra
Renewable energy integration is essential for sustainable AI infrastructure development
Countries with cheapest energy will win the AI arms race
Jhawar emphasizes sustainable renewable energy integration for data centers, while Chopra focuses on cost competitiveness as the primary factor in AI infrastructure success
AI adoption timeline and readiness requirements
Speakers: Son Sokeng, Parag Khanna
Human-centric governance frameworks should be aligned with AI risk assessment
Second-mover advantage allows developing countries to benefit from AI without massive infrastructure investment
Sokeng emphasizes the need for comprehensive governance frameworks and human capacity building before AI adoption, while Khanna advocates for immediate adoption using existing cloud-based solutions
Unexpected Differences
Role of government subsidies in AI development
Speakers: Audience, Narendra Singh
AI subsidies could catalyze adoption similar to India’s solar revolution
Government subsidies and support are making AI compute accessible at low costs in India
While both support government involvement, the audience member questions whether current subsidy levels are sufficient compared to solar energy support, while Singh defends current government initiatives as adequate
Cost-effectiveness of AI vs. human labor
Speakers: Audience, Narendra Singh
Government AI adoption policies should consider job displacement impacts and cost-effectiveness
Entrepreneurial culture must be prioritized in education to help people adapt to AI-driven changes
Unexpected tension emerges where audience highlights AI being more expensive than human labor (7 rupees vs 1 rupee per call), while Singh promotes AI adoption through entrepreneurship without addressing cost concerns
Overall Assessment

The discussion reveals moderate disagreements primarily around implementation approaches rather than fundamental goals. Key tensions exist between immediate AI adoption vs. careful governance development, government intervention vs. market solutions, and infrastructure investment priorities.

Moderate disagreement with significant implications for AI policy coordination. While speakers generally agree on AI’s potential benefits and the need for inclusive development, their different approaches to governance, investment, and implementation could lead to fragmented strategies across the Global South. The disagreements suggest a need for more coordinated policy frameworks that balance rapid adoption with responsible governance.

Partial Agreements
All agree that AI should enhance rather than replace human capabilities, but disagree on implementation – Singh focuses on policy protection, Jhawar on democratization through tools, and Karpan on maintaining human elements in healthcare
Speakers: Satvinder Singh, Vinod Jhawar, Dr. Mahendra Karpan
AI will create collaboration and enhancement rather than displacement of human workers
AI tools will democratize opportunities by breaking language barriers and enabling grassroots participation
Healthcare will maintain human touch for patient comfort and reassurance despite AI diagnostic capabilities
Both agree that trust is fundamental to AI adoption, but Khanna emphasizes building new trust frameworks from scratch while Wainscott advocates leveraging existing institutional trust mechanisms
Speakers: Dipali Khanna, Kip Wainscott
Trust must be designed from day one, not retrofitted after deployment
Financial services demonstrate that existing trust architectures enable successful AI adoption
Both agree on Global South leadership in AI governance, but Garcia emphasizes practical diplomatic engagement while the Moderator focuses on institutional framework development
Speakers: Eugenio Vargas Garcia, Moderator
Global South countries must be co-authors of AI governance norms, not passive recipients
Global South must be co-authors of AI governance norms, not passive recipients
Takeaways
Key takeaways
AI will enhance and collaborate with human workers rather than completely replace them, with current impact being greater on white-collar than blue-collar jobs
Continuous learning and upskilling will be essential for workforce adaptation to rapid technological change across all sectors
The world will need four times more energy in the next 10-12 years to support AI growth, with countries having cheapest energy likely to win the AI arms race
Infrastructure challenges including energy, cooling, and water consumption are critical blind spots that need immediate attention
India’s Digital Public Infrastructure (DPI) model offers affordable, sovereign AI solutions that can be replicated globally, with over 50 countries already building systems on this stack
Small island developing states and Global South countries need AI for institutional resilience and survival, not just competitive advantage
Trust must be designed into AI systems from day one rather than retrofitted after deployment
Global South countries must be co-authors of AI governance norms rather than passive recipients of frameworks from developed nations
Second-mover advantage allows developing countries to benefit from AI through cloud-based solutions without massive infrastructure investment
Multi-stakeholder collaboration between government, industry, academia, and startups is essential for successful AI implementation
Resolutions and action items
Countries should start with small-scale, quick-impact AI projects in sectors like education, healthcare, and agriculture before scaling up
Governments need to implement policies that promote innovation while protecting people, ensuring AI governance frameworks are enablers rather than obstacles
International cooperation and tech diplomacy engagement is essential, particularly for countries with limited resources
Investment in renewable energy integration for sustainable AI infrastructure development should be prioritized
Development of AI readiness assessments and national AI strategies should continue across Global South countries
Focus on human capacity building through education and training programs, with targets like Cambodia’s goal of 100,000 AI-ready talents in 10 years
Establishment of regulatory sandboxes and independent safety assessments to support responsible AI adoption
Creation of interoperable standards and grievance redress mechanisms for AI systems
Unresolved issues
How to define and balance AI sovereignty while maintaining international cooperation and interoperability
The challenge of making AI economically viable when current costs often exceed revenue generation ($2 spent to generate $1)
Skill and talent shortages for maintaining and operating AI infrastructure, with some data centers experiencing 30% downtime due to lack of qualified personnel
The need for new indigenous AI chips with better performance and lower costs to address current economic challenges
How to prevent job displacement in sectors where AI adoption is accelerating faster than workforce retraining
Balancing the speed of AI adoption with the need for comprehensive safety and governance frameworks
Addressing the digital divide and ensuring AI benefits reach remote and underserved populations
Managing the environmental impact of massive AI infrastructure expansion while meeting sustainability goals
Suggested compromises
Governments should restrict AI adoption in certain sectors (like call centers) temporarily while developing retraining programs for affected workers
Adoption of collaborative augmentation models rather than full automation to maintain human involvement in critical processes
Use of cloud-based and edge computing solutions as alternatives to massive data center investments for resource-constrained countries
Implementation of risk-based governance frameworks that align regulatory requirements with the actual risk level of AI applications
Phased approach to AI deployment starting with proven use cases before expanding to more complex applications
Public-private partnerships to share the costs and risks of AI infrastructure development
Regional cooperation frameworks like ASEAN’s DEFA to pool resources and share AI capabilities across multiple countries
Thought Provoking Comments
The world that’s going to win the AI arms war is the country that has the cheapest energy… I really do believe that in India we have an incredible opportunity… My first solar farm eight years ago, my revenue was 18 rupees a kilowatt hour. Today we get 2 rupees 20.
This comment reframes the AI competition from a purely technological race to an economic sustainability challenge, introducing the concept of energy cost as the determining factor for AI dominance. It provides concrete data showing India’s dramatic cost reduction in renewable energy, suggesting a strategic advantage.
This shifted the discussion from abstract AI capabilities to practical infrastructure considerations, leading other panelists to discuss data center costs, cooling challenges, and renewable energy integration. It established energy economics as a central theme for the remainder of the panel.
Speaker: Tejpreet S Chopra
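The price trajectory quoted above implies a steep compound decline. As a rough illustration using only the figures from the remark (18 rupees/kWh falling to 2.20 rupees/kWh, with the eight-year span taken from "eight years ago"), the average annual rate of decline works out to roughly 23%:

```python
# Illustrative arithmetic only, based on the figures quoted in the
# panelist's remark: solar revenue of 18 rupees/kWh eight years ago
# versus 2.20 rupees/kWh today.

start_price = 18.0   # rupees per kWh, eight years ago
end_price = 2.20     # rupees per kWh today
years = 8

# Compound annual factor: the ratio raised to the 1/years power.
annual_factor = (end_price / start_price) ** (1 / years)
annual_decline_pct = (1 - annual_factor) * 100

print(f"~{annual_decline_pct:.0f}% average annual price decline")
# prints "~23% average annual price decline"
```

A sustained decline of this magnitude is what underpins the claim that energy cost, not raw technology, may decide who leads in AI.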
I think the biggest impact is actually more on white-collar jobs rather than blue-collar jobs… it’s really impacting certain segments of the economy… a lot of it has got to do with collaborative augmentation rather than full automation
This insight challenges the common narrative that AI will primarily replace manual labor, instead suggesting that knowledge workers face greater disruption. The distinction between ‘collaborative augmentation’ and ‘full automation’ provides a nuanced framework for understanding AI’s impact on employment.
This comment fundamentally shifted the jobs discussion from fear-based rhetoric to a more analytical assessment. It prompted other panelists to discuss upskilling, continuous learning, and the human elements that cannot be replaced by AI, leading to a more balanced conversation about workforce transformation.
Speaker: Satvinder Singh
For countries like ours, we’re starting out at a severe deficit. There is not a surplus of radiologists… So AI actually comes in to help us with diagnosis, accuracy, speed of diagnosis… But to comfort and reassure those parents, that’s a human function… that cannot and can never be replaced by AI
This comment provides a crucial perspective from developing nations, showing how AI can address critical skill shortages rather than create unemployment. The distinction between diagnostic capabilities and human emotional support offers a practical framework for AI deployment in healthcare.
This grounded the abstract AI discussion in real-world healthcare challenges, demonstrating how the same technology can have different implications in different contexts. It influenced the conversation toward practical applications and the complementary nature of human-AI collaboration.
Speaker: Dr. Mahendra Karpan
Another blind spot with respect to… the huge growth… is going to be cooling. And that’s another blind spot that I think we don’t really pay attention to… we need to really think about this in a holistic sense… the water consumption
This comment introduces critical infrastructure bottlenecks that are often overlooked in AI discussions – cooling and water consumption. It challenges the assumption that energy is the only constraint and introduces environmental sustainability concerns.
This expanded the infrastructure discussion beyond energy to include cooling and water resources, adding complexity to the conversation about AI deployment. It prompted discussions about sustainable data center design and the true environmental cost of AI infrastructure.
Speaker: Nihar Shah
Scale and AI. Today you spend $2 and you generate $1 because half of the 50% goes to the AI chip company… This problem can be only solved through enabling indigenous AI chips which has a better performance and the lower cost
This comment reveals the economic reality of AI deployment – that current costs exceed returns due to chip monopolies. The call for indigenous chip development introduces the concept of technological sovereignty as essential for AI viability.
This comment shifted the discussion from AI applications to fundamental economic viability, highlighting the importance of supply chain independence. It connected AI development to broader themes of technological sovereignty and economic sustainability.
Speaker: Narendra Singh
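The economics described here can be sketched numerically. This is a minimal illustration assuming only the panel's figures ($2 of cost per $1 of revenue, with roughly half the cost going to the chip vendor); the function and its parameters are hypothetical, not a model the speaker presented:

```python
# Rough sketch of the AI unit economics cited on the panel: $2 of cost
# per $1 of revenue, with ~50% of that cost paid to the chip vendor.
# The chip_cost_factor parameter is a hypothetical knob for exploring
# what cheaper (e.g. indigenous) chips would change.

def cost_per_revenue_dollar(base_cost=2.0, chip_share=0.5, chip_cost_factor=1.0):
    """Total cost incurred to earn $1 of revenue.

    base_cost: cost per $1 revenue today ($2 per the panel).
    chip_share: fraction of that cost going to the chip vendor (50%).
    chip_cost_factor: relative chip price after substitution
                      (1.0 = today's chips, 0.5 = chips at half price).
    """
    non_chip = base_cost * (1.0 - chip_share)
    chip = base_cost * chip_share * chip_cost_factor
    return non_chip + chip

# Today: $2.00 spent per $1 earned, a 50% loss on every revenue dollar.
today = cost_per_revenue_dollar()                           # 2.0

# If indigenous chips halved the chip bill, cost falls to $1.50,
# better, but still unprofitable, which is the speaker's point that
# cheaper chips are a necessary condition for viability at scale.
with_cheaper_chips = cost_per_revenue_dollar(chip_cost_factor=0.5)  # 1.5

print(today, with_cheaper_chips)
```

The sketch makes the structural point explicit: even a 50% cut in chip costs leaves the assumed cost base above revenue, so viability requires reductions across the whole stack, not the chip line alone.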
For ASEAN… we were able to show very quickly through data that the biggest beneficiaries actually of the DEFA is not even advanced economies like Singapore… but actually are the LDCs… because they are going to be more developed
This insight challenges assumptions about who benefits most from digital transformation, suggesting that least developed countries may gain the most from AI and digital connectivity due to their ability to leapfrog legacy systems.
This comment introduced the concept of ‘leapfrogging’ advantage for developing nations, influencing the discussion toward how emerging economies might actually have strategic advantages in AI adoption. It provided a more optimistic framework for Global South AI development.
Speaker: Satvinder Singh
Trust must be designed from day one, not retrofitted after deployment. Transparency, auditability, grievance redress, open architecture are not compliance burdens. They’re adoption accelerators.
This reframes trust and governance mechanisms from obstacles to enablers of AI adoption, challenging the common perception that regulation slows innovation. It provides a strategic framework for building trustworthy AI systems.
This comment fundamentally shifted the governance discussion from viewing regulation as friction to seeing it as infrastructure for scale. It influenced subsequent discussions about building trust mechanisms and the relationship between safety and adoption.
Speaker: Dipali Khanna
Overall Assessment

These key comments transformed what could have been a superficial discussion about AI benefits into a sophisticated analysis of the practical challenges and opportunities for AI deployment in emerging economies. The insights collectively shifted the conversation from theoretical frameworks to operational realities, introducing critical considerations like energy economics, infrastructure bottlenecks, economic viability, and the unique advantages that developing nations might possess. The comments created a more nuanced understanding of AI’s impact, moving beyond simple narratives of job displacement or technological determinism to explore complex interdependencies between technology, economics, governance, and social needs. This elevated the discussion to a strategic level that could inform actual policy and investment decisions.

Follow-up Questions
How can hydrogen fuel cells be implemented at large scale in railways and buses, given the current bottlenecks?
This question addresses the gap between experimental use and large-scale implementation of hydrogen fuel cells in transportation, highlighting infrastructure and cost challenges that need resolution.
Speaker: Harsh Vartan (Audience member)
What are the specific technical barriers and cost challenges for AI pilot companies to achieve full-time impact and scale?
This question focuses on the practical challenges startups face in scaling AI solutions, particularly the cost-revenue imbalance where companies spend $2 to generate $1 due to high AI chip costs.
Speaker: CTO at MindEquity.ai (Audience member)
How can upskilling and reskilling strategies effectively preserve jobs while maintaining human-in-the-loop systems?
This addresses the critical concern about job displacement by AI and seeks practical solutions for workforce transition and human-AI collaboration.
Speaker: Audience member
What specific subsidies and incentives should be provided for AI projects to catalyze adoption similar to the solar revolution in India?
This question draws parallels between renewable energy adoption success and potential AI adoption strategies, seeking policy frameworks to accelerate AI implementation.
Speaker: Audience member
How should governments balance AI adoption with job preservation, particularly in sectors like call centers where AI is more expensive but displaces human workers?
This highlights the policy dilemma where AI adoption may be economically inefficient but socially disruptive, requiring careful government intervention strategies.
Speaker: Narendra Singh
What are the long-term implications of AI on white-collar versus blue-collar jobs, and how should societies prepare for potential full automation?
This addresses the differential impact of AI across job categories and the need for societal preparation for potential widespread automation beyond current collaborative models.
Speaker: Satvinder Singh
How can countries develop indigenous AI chips to reduce costs and dependency on current expensive chip providers?
This focuses on the critical infrastructure challenge of AI chip costs and the need for sovereign AI capabilities to make AI economically viable at scale.
Speaker: Narendra Singh
What are the specific cooling and water consumption requirements for AI data centers, and how can these be addressed sustainably?
This identifies often-overlooked infrastructure challenges beyond energy consumption that are critical for sustainable AI deployment at scale.
Speaker: Nihar Shah
How can edge computing solutions be balanced with cloud-based AI infrastructure to serve different market segments effectively?
This addresses the strategic question of AI deployment architecture, particularly for serving MSMEs and manufacturing sectors that may need on-premise solutions.
Speaker: Tejpreet S Chopra
What specific governance frameworks and ethical guidelines are needed for AI deployment in healthcare, particularly for telemedicine in remote areas?
This addresses the critical need for regulatory frameworks that ensure AI safety and efficacy in healthcare applications, especially in underserved regions.
Speaker: Dr. Mahendra Karpan

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Science AI & Innovation: India–Japan Collaboration Showcase


Session at a glance: Summary, keypoints, and speakers overview

Summary

This discussion focused on “AI for Good,” exploring how artificial intelligence can drive social impact and economic growth in India. The panel featured representatives from T-Hub (Kavikrut), Indus Action (Kritika Sangani), Atal Innovation Mission (Himanshu), and Agilesium Foundation (Rajesh Babu), each sharing perspectives on leveraging AI for societal benefit.


Kritika Sangani discussed Indus Action’s work in making social protection accessible to vulnerable citizens, particularly through their Right to Education (RTE) initiative. Their digital platform has helped 900,000 children access private school admissions under constitutional rights, growing from just 196 students a decade ago. She emphasized using AI for improved targeting through multilingual WhatsApp chatbots and building frontline worker capacity, with the ultimate goal of creating a “United Entitlements Interface” similar to UPI for accessing constitutional rights.


Himanshu from Atal Innovation Mission highlighted the significant disparity between different regions of India in terms of technological advancement and innovation adoption. He shared examples of upcoming state innovation missions in northeastern states, focusing on solving grassroots problems like water quality issues and bamboo market access using AI-powered solutions. The organization aims to create peer-to-peer learning networks and improve public service delivery through AI implementation.


Rajesh Babu from the healthcare sector discussed AI’s transformative potential in medicine, sharing examples of pharmaceutical rep briefing systems and organ matching for transplants. He emphasized that AI, like previous technological revolutions, will create more opportunities than it eliminates, particularly in making healthcare more accessible and efficient.


The panelists collectively agreed that AI serves as a democratizing force, helping bridge divides rather than creating them, especially when designed with equity and social good in mind.


Keypoints

Major Discussion Points:

AI as a democratizing force for access and equity: The panelists discussed how AI can break down barriers to essential services like healthcare, education, and social welfare by simplifying complex processes, reducing bureaucratic steps, and making services more accessible to vulnerable populations.


Scaling social impact through technology integration: Examples were shared of how AI is being used to scale existing solutions – from Indus Action’s work expanding school admissions from 196 to 900,000 children, to using AI for better targeting of welfare programs and creating a “United Entitlements Interface” similar to UPI.


Regional innovation disparities and AI’s equalizing potential: Discussion of the significant technology gaps between India’s western/southern states versus eastern/northeastern regions, and how AI can help level the playing field by enabling smaller states to leverage data and innovation more effectively.


Practical AI applications solving real-world problems: Concrete examples were presented including multilingual chatbots for welfare access, AI-powered organ matching for transplants, water quality monitoring systems, and presidential briefing-style apps for pharmaceutical representatives.


Addressing digital divides and ensuring inclusive AI deployment: The conversation concluded with concerns about AI potentially deepening existing inequalities, with solutions proposed including building equity algorithms into AI systems, maintaining human oversight, and leveraging smartphones’ widespread adoption to democratize access.


Overall Purpose:

The discussion aimed to explore how artificial intelligence can be leveraged for social good and economic growth in India, with particular focus on practical applications that can improve access to essential services, reduce bureaucratic barriers, and create more equitable outcomes for vulnerable populations.


Overall Tone:

The tone was consistently optimistic and forward-looking throughout the conversation. The panelists demonstrated genuine enthusiasm about AI’s potential for positive social impact, sharing concrete examples and success stories. While they acknowledged challenges like digital divides and regional disparities, the overall sentiment remained hopeful and solution-oriented, with each speaker building on others’ ideas to paint a picture of AI as a transformative tool for social good rather than a threat.


Speakers

Speakers from the provided list:


Kavikrut: Moderator/Host of the panel discussion, appears to be associated with T-Hub (startup incubator/accelerator)


Kritika Sangani: Chief of Staff at Indus Action, has worked in the development sector for 10 years, former Teach for India fellow, focuses on making social protection accessible to vulnerable citizens through government partnerships


Himanshu AIM: Works at Atal Innovation Mission (AIM), the federal body that manages innovation for India, housed under NITI Aayog (public policy think tank), leads programs including Setting Up State Innovation Mission


Rajesh Babu: Works in the private sector/corporate foundation space, associated with Agilesium (works with pharma and biotech companies), focuses on healthcare AI solutions


Audience Member: Asked questions about medical breakthroughs and awareness of initiatives


Audience Member 2: Asked about which sectors need more startups


Yashi (Audience Member 3): Asked about ensuring AI systems for public welfare don’t deepen digital divides


Additional speakers:


None identified beyond those in the provided speakers names list.


Full session report: Comprehensive analysis and detailed insights

This comprehensive panel discussion on “AI for Good” brought together diverse perspectives from India’s technology and social impact ecosystem, exploring how artificial intelligence can drive meaningful social change while addressing economic growth imperatives. The discussion featured Kavikrut from T-Hub as moderator, alongside Kritika Sangani from Indus Action, Himanshu from Atal Innovation Mission, and Rajesh Babu from Agilesium Foundation, each offering unique insights into leveraging AI for societal benefit across different sectors.


Panelist Backgrounds and Organizational Context

Kritika Sangani brought a decade of development-sector experience to the discussion; a former investment banker and Teach for India fellow, she has spent 10 years with Indus Action. Her organization now works with 18 state governments and national ministries including Labor, Social Justice and Employment, and Health and Human Services, providing her with ground-level insights into scaling social impact through technology.


Himanshu represented the Atal Innovation Mission, operating under NITI Aayog as Prime Minister Modi’s brainchild launched in 2016. Now celebrating 10 years with the tagline “School to Space,” AIM focuses on building innovation ecosystems across India’s diverse regional landscape, with particular attention to bridging gaps between developed and developing states.


Rajesh Babu from Agilesium Foundation offered the healthcare technology perspective, drawing from practical experience implementing AI solutions in medical settings and pharmaceutical operations, including both successful deployments and expensive failures that provided valuable learning experiences.


Transforming Social Welfare Delivery Through Digital Innovation

Kritika Sangani’s presentation of Indus Action’s work provided compelling evidence of AI’s transformative potential in social welfare delivery. Her organization’s journey with the Right to Education Act’s Section 12.1c – which mandates 25% reservation in private schools for economically disadvantaged children – demonstrates how technology can scale constitutional rights from theoretical provisions to practical reality. The transformation from serving just 196 students in 2013-2014 to reaching 900,000 children today illustrates the exponential impact possible when AI is strategically deployed in social protection systems.


The organization’s development of the RTE Management Information System exemplifies how AI can eliminate bureaucratic friction. By replacing the traditional lottery system – where parents had to physically visit up to 10 schools to determine admission outcomes – with a digital lottery integrated into a streamlined online platform, they reduced what Sangani termed “10 burdensome steps to a single touch process.” This framework of radical simplification became a recurring theme throughout the discussion.
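The digital lottery at the heart of this simplification is conceptually straightforward. The sketch below is a hypothetical illustration only, not the actual RTE MIS algorithm: a draw seeded with a published value (here, an assumed draw date) is reproducible, so any party can re-run it and verify the allotment.

```python
import random

def run_digital_lottery(applicant_ids, seats, seed):
    """Hypothetical sketch of a reproducible seat-allotment draw.

    A published seed makes the draw auditable: anyone can re-run it
    and obtain the same result. Illustrative only, not the RTE MIS."""
    rng = random.Random(seed)          # seeded RNG => deterministic draw
    pool = sorted(applicant_ids)       # canonical order before shuffling
    rng.shuffle(pool)                  # the "draw of lots"
    return pool[:seats], pool[seats:]  # (selected, waitlist)

selected, waitlist = run_digital_lottery(
    ["APP-001", "APP-002", "APP-003", "APP-004", "APP-005"],
    seats=2,
    seed=20240601,  # e.g. the published draw date
)
```

Because the draw is a pure function of the applicant pool and the seed, the physical step the transcript describes (parents visiting each of 10 schools to learn the outcome) collapses into a single online lookup.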


More significantly, Sangani articulated a paradigm shift from citizen-initiated to state-initiated service delivery. Rather than expecting vulnerable populations to navigate complex systems to discover their entitlements, she proposed using AI to enable governments to proactively identify eligible citizens. This “flipping” of the discovery mechanism represents a fundamental reimagining of how social protection systems could operate, using AI and machine learning on existing government datasets including employment records, public distribution system data, and demographic information.
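As a purely illustrative sketch of state-initiated discovery, with made-up field names and an assumed income ceiling (not Indus Action’s actual pipeline, which would apply ML to real government datasets), the core idea reduces to scanning existing records and flagging likely-eligible households for outreach:

```python
# Hypothetical sketch: proactively flag households that appear eligible
# for an entitlement, instead of waiting for them to apply.
# Field names and thresholds below are illustrative assumptions only.

ANNUAL_INCOME_CEILING = 100_000  # assumed eligibility ceiling, in rupees

def flag_eligible_households(households):
    """Return IDs of households that look eligible for proactive outreach."""
    flagged = []
    for h in households:
        # Entry-age window for school admission, assumed 5-7 years
        has_entry_age_child = any(5 <= age <= 7 for age in h["child_ages"])
        if h["annual_income"] <= ANNUAL_INCOME_CEILING and has_entry_age_child:
            flagged.append(h["id"])
    return flagged

records = [
    {"id": "H1", "annual_income": 80_000,  "child_ages": [6]},
    {"id": "H2", "annual_income": 250_000, "child_ages": [6]},
    {"id": "H3", "annual_income": 60_000,  "child_ages": [12]},
]
```

The point of the sketch is the direction of the flow: the state queries its own data to find citizens, rather than citizens navigating the state.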


The organization’s current experimentation with multilingual WhatsApp chatbots demonstrates practical implementation of this vision, serving as first points of contact for parents while simultaneously building frontline worker capacity. These AI-powered interfaces address both accessibility and resource constraints that typically plague welfare delivery systems.


Bridging India’s Regional Innovation Disparities

Himanshu provided crucial context about India’s uneven technological landscape, highlighting significant disparities between western and southern states versus northeastern and eastern regions. His candid assessment revealed that while states like Telangana, Karnataka, and Maharashtra engage with advanced technologies including quantum computing and sophisticated AI applications, other regions remain at much more basic levels of technological adoption.


This regional analysis informed AIM’s strategic approach, with concrete examples demonstrating AI’s potential to address region-specific challenges. In one unnamed state, Himanshu shared examples of addressing high iron content in water through AI-powered diagnostics and low-cost solutions, and connecting bamboo producers with global markets through intelligent quality assessment and matching systems.


Particularly striking was his revelation about documented grassroots innovations. In this state, while only 120-130 registered startups existed, over 1,100 validated innovations had been documented, with 3,000 total innovations recorded. When calculated per capita, this innovation density exceeded that of traditionally recognized innovation hubs like Karnataka and Maharashtra.


Himanshu emphasized AI’s potential as an equalizing force, noting that the technology doesn’t discriminate based on geographic location. A district in Sikkim can leverage AI for public service delivery decisions with the same sophistication as Bangalore, provided the underlying data infrastructure exists. He also highlighted India’s linguistic diversity challenge, mentioning the country’s 22 scheduled languages and how AI development by both government and private entities could eliminate language barriers.


Healthcare Innovation Through Practical AI Implementation

Rajesh Babu’s healthcare perspective provided concrete examples of both AI failures and successes, offering valuable lessons about implementation realities. His most instructive example involved a pharmaceutical representative briefing system that initially failed when attempted with earlier technologies like AWS Lex and Polly due to poor accent recognition and limited medical vocabulary understanding. This $1.5-2 million investment became highly successful only when rebuilt with advanced AI capabilities, demonstrating how rapidly evolving AI technology has made previously impossible applications suddenly viable.


The successful system now provides “presidential briefings for pharmaceutical reps,” delivering contextualised, voice-delivered summaries of previous doctor interactions, pending follow-ups, and relevant medical information. This transformation from a frustrating, unusable interface to a seamlessly adopted tool illustrates AI’s maturation from promising concept to practical implementation.


Babu’s broader vision encompasses AI agents operating at hospital, doctor, and patient levels, suggesting a future healthcare ecosystem where intelligent systems handle routine interactions, enabling human medical professionals to focus on complex cases requiring personal attention. His argument positions AI as a democratizing rather than displacing force, drawing historical parallels with previous technological transitions that created more opportunities than they eliminated.


Infrastructure Foundations and Implementation Challenges

The discussion revealed India’s strong foundational infrastructure for AI deployment, with Kavikrut noting the country’s remarkable 22GB average monthly mobile data usage and widespread smartphone adoption. This infrastructure provides the platform for AI democratization – once individuals have smartphones, they effectively have access to AI capabilities.


However, the conversation acknowledged implementation challenges, particularly around ensuring that the most marginalized populations can access AI-enabled services. Audience questions highlighted the persistent gap between developing solutions and ensuring intended beneficiaries know about and can use them, especially for vulnerable populations who may lack digital literacy or reliable internet connectivity.


The panelists revealed different philosophical approaches to addressing digital divides. Sangani emphasized proactive equity measures, including embedding algorithms that ensure balanced gender representation and inclusion of marginalized groups, while maintaining “human in the loop” approaches that keep frontline workers like ASHA and Anganwadi workers integral to service delivery. Conversely, Babu argued that AI is inherently democratizing, flattening traditional hierarchies and making capabilities previously available only to specialists accessible to broader populations.


Sectoral Opportunities and Entrepreneurial Focus

When audience members asked about which sectors need more startup attention, the discussion identified healthcare and education as areas with the greatest potential for AI-driven social impact. Kavikrut observed that redirecting even 10% of talent currently focused on fintech or consumer technology toward healthcare could fundamentally transform the sector, reflecting the significant opportunity cost of current entrepreneurial focus.


Himanshu emphasized that founders should focus on creating value rather than pursuing investment trends, while the examples shared throughout the discussion – from multilingual chatbots for welfare access to AI-powered water quality monitoring and bamboo market connections – demonstrated that impactful applications often address seemingly mundane but fundamentally important challenges.


The Vision of AI as Democratic Infrastructure

Kavikrut introduced the concept of a “United Entitlements Interface” analogous to India’s Unified Payments Interface (UPI), envisioning a future where citizens can seamlessly access constitutional rights through a single, AI-powered platform. This systemic thinking extends beyond individual program improvements to fundamental transformation of citizen-state interactions.


The conversation revealed a shared vision of AI as transformative infrastructure rather than merely another technology tool. Babu’s comparison to electrification – describing AI as an energy that will make everything intelligent – captures this perspective. Just as electricity transformed every aspect of human activity, AI’s integration into existing systems promises similarly comprehensive change.


Conclusion: Practical Optimism Grounded in Real Experience

The panel’s collective vision positions AI not as disruptive technology that replaces existing systems, but as democratic infrastructure that enhances human capabilities while reducing systemic barriers. The emphasis on equity algorithms, human-in-the-loop approaches, and building on existing systems reflects mature understanding of technology implementation in complex social contexts.


The conversation’s optimistic tone, grounded in concrete examples and measurable outcomes, suggests that AI for good is moving beyond aspirational rhetoric toward practical implementation. The transformation from 196 students to 900,000, the successful deployment of healthcare AI systems after expensive failures, and the documentation of thousands of grassroots innovations demonstrate that AI’s democratizing potential is already being realized.


However, the discussion also revealed that achieving AI’s full potential for social good requires intentional design choices, proactive equity measures, and sustained commitment to inclusive implementation. The panelists’ shared conviction that AI represents infrastructure for positive change – contingent on focusing on impact rather than profit maximization – provides a framework for evaluating future initiatives. By maintaining focus on reducing complexity, enabling proactive service delivery, and building solutions that serve millions rather than generating unicorn valuations, AI can indeed serve as infrastructure for a more equitable and efficient society.


Session transcript: Complete transcript of the session
Kavikrut

Yes, you would think it will create access, it will democratize access to healthcare. Yes. Both in terms of price, in terms of availability. Yes. That’s great. Kritika, over to you. Tell us about yourself, the organization, and your take on AI for Good.

Kritika Sangani

Sure. Thanks, Kavi. Really happy to be here and privileged to share the stage with each of you. I’m Kritika Sangani, and I work as Chief of Staff at Indus Action. I’ve been in the development sector for about 10 years; I started in the corporate sector, serving investment banks in another lifetime. I’m also a Teach for India fellow. I joined Teach for India, decided not to look back, and continued with this sector, and in fact have been associated with Indus Action for almost 10 years now. Indus Action, as Kavi also alluded to: we work with governments on making social protection accessible to vulnerable citizens. The end goal is to ensure that welfare is delivered to them during critical life moments for the household, such that they are able to tide over moments of crisis and make the most of moments of opportunity, like education or healthcare, and codify pathways out of poverty for themselves and their families.

The government is the protagonist in our work because, of course, they are the biggest implementers of social protection. We are the enablers: we come in with tech solutions, policy redesign solutions, and capacity building solutions as a team. Our model is to embed these solutions within existing systems instead of creating parallel systems, and that is what we are about. We work with about 18 state governments, and are actively working with national ministries like Labor and the Ministry of Social Justice and Employment, and with the Department of Health and Human Services. And yeah, very excited to be here. I think when I heard the prompt, Kavi: AI for good, what does it mean to us really?

It actually is about enabling equitable access for vulnerable citizens to social protection. Right now, they take about 10 steps, 10 burdensome steps to access a single entitlement. How do we bring that down to a single touch process? That is what AI for good stands for us.

Kavikrut

Great. I heard access. You’re talking about simplicity and speed. I’m really curious to find out more themes as we keep talking. Thank you, Kritika. Himanshuji, over to you.

Himanshu AIM

Yeah. Thanks, Kavi. And thanks to, I think, the government of India for managing such a humongous event. I know as a matter of fact, because we’ve been part of the planning team, that almost a year’s preparation has gone into it. So kudos to everybody who’s pitching in, in their own respective roles. I’m Himanshu. I come from an organization called Atal Innovation Mission, which is the federal body that manages innovation for the country. We are housed under NITI Aayog, which is the public policy think tank of the government of India. Atal Innovation Mission was the brainchild of the current Prime Minister, Honorable Narendra Modi ji, in 2016, when we felt as a country that you need a body for innovation that cuts across all ministries, all life cycles.

So essentially it works from school to startups, and within startups, even the high-tech startups like space. We’re going to celebrate 10 years next week, and the new title that we have decided for ourselves is School to Space, covering almost everything. I’ll not drain your time by talking too much about the programs that we do; maybe we’ll talk about it in a bit. I think for me, and for the organization as a whole, because we’re also housed within a government institution, social good means asking: whatever AI is trying to enable, is it having some impact at the grassroots level? Whether we work with incubators like T-Hub, or startups directly, or the state governments. I also lead a program called Setting Up a State Innovation Mission, for example, where the idea itself is: how do we move beyond the current level of innovation that is happening?

How do we bring every part of the ecosystem together and put a layer of AI on top? Not only AI, I would say all frontier technologies. For example, when you talk to mature states in the southern and western parts, there’s this huge conversation on quantum, and a huge conversation around even AI for health. There are conversations about whether we can enable AI to improve public service delivery, for example, right? Where we say we are not building unicorns, not unicorns in terms of valuation, but in terms of social capital, right? Where we ask: can it start impacting a billion lives? Yeah. So I’ll pause here.

Kavikrut

No, this is great, Himanshuji. Thank you. We heard access, we heard simplicity and speed, and you talked about scale and infrastructure, because that’s the mandate organizations like AIM come with. I’ll talk about one bit of what we see at T-Hub. We have understood this very clearly: as T-Hub, and as organizations even like AIM, the goal is not for startups to be verticals or horizontals. Startups, MSMEs, nonprofits, I think we’re all here as vehicles of economic growth. If you talk to startups, they don’t talk about policy, they don’t talk about infrastructure. When they’re building, they just want to solve problems, they want to create value, and I think AI has become a supercharged tool for startups to do that.

So the point I was trying to bring out, at least from our experience with startups in the last 15 years, is that we see it all as a wave, but this is probably the strongest tool that startups have ever had access to, including for a lot of social impact startups. Even if you look at the Swiggys and Zomatos of the world, and take a step back and look at what they’re talking about on social media in the last few months, both positively and negatively, you will see the major uproar is about the gig economy. I was reading an article in The Economist which said that we are one of the only countries or economies in the world where the gig economy has become a true form of employment. So people are talking about minimum wages, they are talking about labor treatment, they are talking about human rights, all of that. Now, who would have thought that food delivery companies, multiple food delivery and grocery delivery companies, would actually create an organized labor market in our country? That’s the lens that we take for startups.

I will hand it back to you; I have a question for some of you based on what you said. But I will also drop in one theme here. When I said AI for good, it obviously means AI for good impact, for economic growth. But it also means AI for good, which is the other way to interpret the English: it’s here to stay, it’s here forever, and it is our job now to figure out what to do with it. And I’ll quote, I think Nandan Nilekani ji yesterday said something about how it is either a race to the top or a race to the bottom with AI.

And I think the only way to go to the top is to focus on impact. So, Kritika, I want to pick this up with you. We know Indus Action’s work: you talked about equitable rights, social protection, welfare and access to welfare, and reducing 10 steps. Tell us a little bit about the work that you’ve done in RTE; I don’t think the audience is aware. Talk about admissions. And then tell us, where do you see the biggest opportunity for reducing those steps, in that example, with AI?

Kritika Sangani

Absolutely. Thanks, Kavi, for setting that up. So, Indus Action started with the Right to Education Act. There is a specific clause under it, called Section 12.1c. Most of you would have watched this movie called Hindi Medium, or English Medium, both versions. It essentially mandates that 25% of seats in private unaided schools be reserved for children from economically weaker sections and disadvantaged groups. Now, we picked this sliver of a large constitutional mandate, which is Section 12.1c, with a fundamental belief in the power of choice for parents from vulnerable sections of society to put their children in a school of their choice, regardless of whether it’s public or private. And with that spirit, which is also part of the letter and spirit of Section 12.1c, we started working on this particular right.

So we started our work in Delhi, with running on-ground awareness campaigns. And very early we realized the problem: we were able to mobilize about 19,000 applications in 2013-2014 in Delhi, of which only 196 students got a call for admissions. And this is a constitutional right; this is their right to get into the school. That was a huge eye-opener for us, and we realized that by working with citizens and parents alone we were not going to solve this problem, because the government system also needs streamlining and support. There was willingness to execute, but the government is really constrained for resources, tech capacity, and so on. So we went there, and fortunately the government showed willingness to actually experiment with a fully online approach. The Rajasthan government had already done it, so some of the peer effect also worked in our favor. That’s when we introduced what we call the RTE MIS, our first solution. It has now evolved into an education digital public good, an open-source modular product that we’ve launched for any government to adopt. What that changed: this particular act actually works on a lottery mechanism for selection. The schools have to do the draw of lots, and the parent has to go to the 10 schools they applied to, to actually see whether the child has been selected in school A, B, C, or D.

We cut that entire physical transaction and got a digital lottery integrated, which is actually our secret sauce. Yeah. And that particular module has now been adopted, in some shape or form, across the 18 states that we’ve worked with.

Kavikrut

How many applications in total and children in schools now?

Kritika Sangani

Children in schools now are about 900,000, 9 lakh children, from 196 ten years ago. And states: when we started, we made an exit from our first state, which is Delhi, and then Uttarakhand. Uttarakhand adopted our end-to-end system. That took about seven to 10 years. Now we’ve shortened that entire cycle to three.

Kavikrut

And tell me, Kritika, what is AI giving you as a tool to simplify and scale what you just described?

Kritika Sangani

I think we’ve actually now moved it to the next level by focusing on who these children applying to this right are. Are they the most vulnerable? It’s the challenge of discovery, and we are using AI for improved targeting for the state.

Kavikrut

And how are you using AI in targeting?

Kritika Sangani

So, AI in targeting: what we are currently experimenting with is a WhatsApp chatbot. It’s a multilingual chatbot which serves as a first interface for the students or the parents to apply through. We are also building frontline worker capacity. What that does is reduce the load on the overburdened frontline worker with respect to reaching out to these students, and it also helps us target the most vulnerable: there’s the multilingual advantage, plus a physical touch point, so that if I’m confused I can actually reach out to somebody to navigate the system and then apply to the…

Kavikrut

So the path from 196 to 9 lakh was about 10 years, but I'm understanding that going from 900,000 to 9 million will be much shorter. It won't be 10 years.

Kritika Sangani

Absolutely. So there are 20 lakh seats available under this particular right.

Kavikrut

Every year?

Kritika Sangani

Every year.

Kavikrut

Annually 20 lakh children?

Kritika Sangani

Annually there are 20 lakh children. Currently there is about 60% coverage; when we started it was at the 30% mark. So we definitely…

Kavikrut

So AI will scale that. And I think you didn't talk about this, but I know from our other conversations that while AI will help you drive this deeper and scale, what you're also doing horizontally is that the DPG, the Digital Public Good they have built in education, the stack they have built, is now being universally applied to other entitlements or constitutional rights. So we heard the example of education. Absolutely. The dream, I know the project name is UEI: instead of a Unified Payments Interface, Indus Action wants to build a United Entitlements Interface. Think of DigiLocker meets DigiYatra meets the constitution. You log in, you check your own eligibility, and you can tap into a constitutional right, apply for it, and actually get access to something that is already rightfully yours.

Absolutely. This is phenomenal. Thank you so much for that, Kritika. Himanshuji, we will take a step back. I love depth, especially when people who are in action talk about it. Pick a specific example and tell me: where are you seeing AI truly unlock the real impact of the work that you do? I have the same question for Rajesh Babuji after this.

Himanshu AIM

Okay, I'll probably step maybe two steps even further back. When the cabinet approved our extension in 2024, one of the programs that was really high on priority was setting up state innovation missions. We did a couple of workshops and realized that there's a huge disparity between the western and southern parts of the country at one end of the spectrum, and the northeast, eastern and northern parts at the other. Telangana, Andhra, Karnataka, Maharashtra are already operating at a level closer to the more evolved countries, in terms of technology enablement and even the understanding of how this can be leveraged by a startup.

And when you take the diametrically opposite view, the startups in the northeast and the eastern part of the country are really not even at the basic level; all of this is new to those states.

So some of the first state innovation missions that we plan to set up are in the northeast and the eastern part of the country. The first launch will be next week. I don't want to take the thunder away. I know what's coming up; we're talking to them about it. You're launching a state innovation mission next week. I won't name the state. Yes, correct. And we're having some interesting conversations with the state, and it's a very progressive, evolved state, though not rich in manpower or resources, for the simple reason that all the good people migrate to the Hyderabads and Bangalores of the country and don't remain in the state.

But those who have remained back are actually building something really incredible for the state itself; they're trying to solve some grassroots problems. I'll talk about something that is more relevant. This is a conversation that happened about two months back, when almost everything was set, and the idea was: how do we launch a ministry-backed hackathon? Now, the water in this state (let me not name the state) has a high iron content. The idea was: can we convert this into a hackathon and say, leverage AI to first identify the iron content of the water in different parts of each district? So it's not working at the state level; it goes one level below, to the district, and even one level below that, to the sub-district level.

Identify the different levels and different types of iron, and then run a hackathon where a startup can leverage this collected data and build solutions in two ways. One, a low-cost diagnostic: can we build a 100-rupee kit that can test this water? And second, maybe build a low-cost solution to solve the problem itself. Then another conversation started. The state has a lot of bamboo. In fact, if any of you have been to the newer terminal at Bangalore airport and seen all that beautiful bamboo work, it came from this state. The quality of the bamboo is better than China's and Vietnam's, but they only get about one-tenth or one-twentieth of the price.

One, because they don’t have access to market. Two, they are not able to even identify what bamboo will sell where at what price, right? Can we link it to the global market, identify who needs what? So they are trying to create a dashboard for it, right? And that thought came because when they realized that people have appreciated the bamboo that has gone into the terminal 2 of Bangalore airport, there could be a lot of use cases, right? Third element, and this was fascinating, right? And this is something which is sort of not something to be proud of in government. We have a lot of data, right? Everybody knows we have a lot of data in the government, right?

We were just trying to build a small funnel: what is the total number of innovations, and how many convert into a startup and create jobs? Going from startups to jobs is the slightly easier part; you can multiply by 10 or 15, or you simply have the data for the 100 to 200 DPIIT-registered startups in the state. But the first thought that came up while I was talking to the secretary there: he said, let's look at the innovations that are happening in the state. Now, imagine the state has 120 to 130 startups but 1,100 documented grassroots innovations, and these are validated innovations; the overall count is 3,000, which in per capita terms is better than a lot of states, even Karnataka and Maharashtra.

I come from a consulting background, so the first thing I did was divide it by the population and look at the per capita figure. It's twice the number for Maharashtra. And they are trying to solve problems which are very basic. For example, they are saying, hey, if we mix this plant and this plant, it can lower your blood sugar. I don't know; some of them may be patentable, some may not be. But the idea is that even if only 10% are good enough, 10% of those 1,100 is about 110, which would nearly double the number of startups, and the innovation already exists. Somebody just has to commercialize it.

And this number sits on the National Innovation Foundation dashboard, and the state government had no clue that it had this many innovations.

Kavikrut

And a lot of this you can now supercharge with AI, which was slower before. And now we have multiple infrastructure layers, everything from 5G smartphone usage to the largest user base for ChatGPT. I don't know if you know this number, but 22 GB per month is the average 5G mobile data usage in India.

Himanshu AIM

Correct. And the other thing we are discussing with this state is how AI can improve public service delivery. There are some broad use cases. Simple cameras on traffic lights that monitor the flow of traffic at each hour or each minute; link that up to automate the signals, reduce petrol and diesel consumption, show some savings and thereby earn carbon credits. That is one very simple use case. Or can we utilize satellite imagery to identify how lakes are drying up, or what the water levels are? Just basic imagery. Everything is already available; it's not that we have to create anything new.

And this is available with the government. One of the startups that we have funded at the Atal Innovation Mission has created small sensors embedded in the pipeline, measuring the flow of water at each stage, maybe every 10, 20, 30 feet, and telling you where exactly the leakage is. So people don't have to walk along the pipe to see where the water is going; you go directly to that point and fix it. The next level the startup is building: can we also look at where leakage happens repeatedly, and at the material, whether there is more stress at that particular point of the pipeline, to ensure that the next time you're building or repairing it, the leakage doesn't come from that same part.
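The leak-localization idea maps to a simple computation: compare flow readings at consecutive sensors and flag the segment where flow drops. A hedged sketch, assuming fractional-loss thresholding (the startup's actual method is not described in the session):

```python
def locate_leaks(readings, tolerance=0.05):
    """Given flow readings (litres/min) at successive sensors along a
    pipeline, flag each segment where flow drops by more than `tolerance`
    (as a fraction of upstream flow) between consecutive sensors."""
    leaks = []
    for i in range(1, len(readings)):
        upstream, downstream = readings[i - 1], readings[i]
        if upstream > 0 and (upstream - downstream) / upstream > tolerance:
            leaks.append((i - 1, i))   # leak lies between sensors i-1 and i
    return leaks

# Flow is steady near 100 L/min, then drops sharply after sensor 2,
# so the leak is localized to the segment between sensors 2 and 3.
segments = locate_leaks([100.0, 99.0, 98.5, 80.0, 79.5])
```

Crews can then go straight to the flagged segment instead of walking the pipe, which is exactly the saving Himanshu describes.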

Kavikrut

No, this is great, Himanshuji. First of all, I can feel your energy and excitement for the region, for the upcoming state innovation mission launch, as well as the beautiful vest you're wearing today. (No, it's not from that state; we're not giving any hints.) And I was just going to say that in the pipe example you talked about, there are existing innovations, there is latent energy that startups have, there are real problems that can be solved, and AI is helping tie all of this together. It's not just supercharging or fueling this; it's the glue that's bringing it all together.

And the best part is that in the truest sense this is democratization, because AI will enable Assam or Manipur or Meghalaya or Sikkim to be at the same level as, say, Karnataka. If you have that AI layer on the data and you want to take a data-backed decision for public good, nothing stops Sikkim from taking the same decision that Karnataka can take today.

Himanshu AIM

And the other very important thing that we are trying to build within this state innovation mission is to create a peer -to -peer learning network, right. There could be something fascinating that Sikkim can offer in terms of organic vegetation or agriculture which can be picked up from the other agrarian states. Yeah.

Kavikrut

Yeah, this is great. We'll go over to Rajesh Babuji, and then we'll break for questions. We've heard the building-in-public perspective from a non-profit as well as a federal organization; I want the private, corporate perspective, and the work you do through the foundation. You already spoke about access and availability in healthcare. Pick an example that you're already building towards, and tell us what you are most excited about: how will the products and services you are building truly unlock value for social good and economic growth?

Rajesh Babu

Always, yes. I'll first touch upon one point. There is a lot of concern and fear about AI coming and taking away jobs. I think that's partially true, but in the big picture it is not true. This is what happens every time a new technology comes. When humanity started as a civilization, there was only one industry, which was agriculture, pre-industrialization. Then, when industrialization happened, more industries opened up. The move from horse power to steam power would have seemed very disruptive to the many people whose livelihoods depended on horse power or bull power. In the same way, when steam moved to electricity, it would have seemed very disruptive to people whose business was all in steam and coal; they would have thought this was not a good thing. And even yesterday somebody was referring to the late 1800s, when it was said that the patent office could be closed because everything that needs to be invented had already been invented. But here we are, almost 130 to 140 years later, and new things are still unfolding.

So I think, as many leaders at the forum said, AI is like another form of energy, like electrification. It's going to drive a big transformative change, and there is not a thing it won't touch. It will touch everything, and it's going to make everything intelligent. And when everything becomes intelligent, largely very good, very positive things will happen. It's going to enhance the livelihood or quality of life for many people, and it's going to create a lot more opportunities. With that being said, in the industry we are focused on at Agilisium, for example, we work with pharma companies and biotech companies.

I will give you one of the problems we were trying to solve. It was not solvable at the time because of the technology, and this was as recent as four or five years ago.

Kavikrut

Tell us more about this example

Rajesh Babu

For example, in 2018 we invested in a project. Our customers were pharma companies, and the users were their reps. The reps go and meet doctors, and they are supposed to know which doctor they are meeting and what the last conversation with that doctor was. The doctor might have said, I want to know about this medicine, what kind of adverse impact it could have, what side effects it may have if I give it to my patients, or requested some technical details. The rep has to go back, get those details, and bring them to the next meeting. But between two appointments it is sometimes three to six months, so they don't remember what happened. Of course they may have it on paper or in Salesforce, but when you are going in today you don't remember the previous meeting, and it's not accessible while you are driving. So what we wanted was the last conversations, replayed. We wanted to build an AI implemented in a phone app. The previous day, it looks at your calendar and at which doctors you are visiting. Then it goes to the CRM for the last two or three conversations with each doctor, plus whatever homework you did on them, takes all of that, and turns it into a voice memo. At 7 a.m., when you are going to meet the doctors, you play that voice memo, and it says: hey, today you are meeting such-and-such doctors, this is what the last three conversations were, this is what you are supposed to tell them.

Kavikrut

It's almost like a morning presidential briefing.

Rajesh Babu

Exactly, a morning presidential briefing, and that is actually where I got the idea. At that time, the technology available to me was Lex. Lex is a conversational AI tool AWS had released out of Alexa: whatever they used to build Alexa, they gave as a service. And there was another AWS technology called Polly, for speech. So we took these two technologies from the AWS platform and tried to build this for our customers. It sucked. The experience was really bad. It could not understand the medical keywords, it would not understand people's accents, and reading structured and unstructured data together, which you have to do, did not work well. It was clunky, so we had to shelve the project.

We invested almost a million and a half to two million dollars at that time, across multiple customers of ours. And when we took it to the reps, the experience was not great, and they did not adopt it. If it's not easy and usable, nobody adopts. It would not understand the question; after "can you repeat that again" three times, they would throw the phone. And even when it caught the words, it did not understand them, because of the accents and the medical and technical vocabulary. It was bad. Now the same thing has been implemented, a year back, with modern AI, and it is super seamless.
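The briefing pipeline he describes (calendar, then CRM, then a spoken memo) can be sketched as plain text assembly; the speech step, which Lex and Polly handled poorly at the time, is omitted here. All names and data structures are invented for illustration:

```python
from datetime import date

def build_briefing(today, calendar, crm_notes, max_notes=3):
    """Assemble a rep's morning briefing: for each doctor on today's
    calendar, pull the last few CRM conversation notes. The resulting
    text would then go to a text-to-speech step (omitted here)."""
    lines = [f"Briefing for {today.isoformat()}."]
    for doctor in calendar.get(today, []):
        lines.append(f"You are meeting {doctor}.")
        for note in crm_notes.get(doctor, [])[-max_notes:]:
            lines.append(f"  Last time: {note}")
    return "\n".join(lines)

today = date(2024, 1, 15)
calendar = {today: ["Dr. Rao"]}
crm = {"Dr. Rao": ["asked about side effects of drug X",
                   "requested dosage data for elderly patients"]}
briefing = build_briefing(today, calendar, crm)
```

The hard part in 2018 was not this assembly but the speech layer: understanding medical vocabulary and accents, which modern models now handle far better.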

Kavikrut

And where do you see the impact of adoption of this technology?

Rajesh Babu

Across the board, everywhere. Think about it: for every individual, we can take their calendar and create a personal agent. It will look at their tasks, their email, their calendar, and tell them what they are supposed to do.

Kavikrut

I think a lot of people have begun to use that.

Rajesh Babu

Yes, and now, not just for yourself, an organization can create an agent for each and every employee. Instead of the manager going and telling each person, hey, you are supposed to complete this, their personal agent buddy can tell them, which is a little more private and more comfortable to hear.

Kavikrut

I think in healthcare and the service that you are in, a continuous flow of information, high quality information, makes a huge difference in availability of the work that you do. Exactly,

Rajesh Babu

this is simple as accessibility information like you said. And scale. But then I will take another complex situation where we are working with another research institute in San Diego. Yeah. Where… every individual, every one of our individual, let’s say our calendar we can take, we can create a personal agent for us. And it will look at the task, it will look at our email, it will look at our calendar, and it will tell, this is what you are supposed to do. I think a lot of people have begun to use that.

Kavikrut

Yeah.

Rajesh Babu

Where basically, this is in liver transplant. The patient is on the waiting list, sometimes for a long time; a donor comes in, and they are matched. Now, what we are saying is that is not the best way, because in a liver transplant, in any organ transplant, acceptance of the organ is very difficult, because it is seen as a foreign body.

Kavikrut

Yes, the foreign body

Rajesh Babu

And the body will try to reject it; making it accept the organ is a very, very big problem. So you need to look at the biologics of both patients and see who is most conducive, from a biological point of view, to receiving this organ. There are biological parameters and physiological parameters that you can now match on, and there are too many parameters for a simple old algorithm; it is very difficult. Now, what we do, and there are multiple research papers on this…

Kavikrut

How far along are you on this matching for organs?

Rajesh Babu

We are helping one of the top institutes in San Diego, Scripps, where a lot of medical Nobel laureates work. We are helping one of the researchers, actually from India; it's published, so I can quote him. His name is Sunil Kurian. He has published this and implemented it with AI. Now, given the donor and the patients, the AI can tell who the best recipient for this match is, and who will have a better living condition with the organ afterwards. It can predict this based on various parameters, which is not easy to do otherwise. So, from a simple use case to…
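To make the matching idea concrete, here is a toy weighted-compatibility score over a couple of parameters. This is purely illustrative and is not Dr. Kurian's published model, which uses many more biological and physiological parameters and learned weights:

```python
def match_score(donor, recipient, weights):
    """Toy compatibility score: hard-fail on blood group mismatch, then
    sum weighted closeness (0..1) of each numeric parameter."""
    if donor["blood_group"] != recipient["blood_group"]:
        return 0.0
    score = 0.0
    for param, (weight, scale) in weights.items():
        diff = abs(donor[param] - recipient[param])
        score += weight * max(0.0, 1.0 - diff / scale)
    return score

def best_recipient(donor, waiting_list, weights):
    """Rank the waiting list for this donor and return the best match."""
    return max(waiting_list, key=lambda r: match_score(donor, r, weights))

# Invented weights and records, purely for demonstration.
weights = {"age": (0.4, 40.0), "bmi": (0.6, 15.0)}
donor = {"blood_group": "O", "age": 35, "bmi": 24.0}
patients = [
    {"id": "P1", "blood_group": "A", "age": 36, "bmi": 24.0},  # blood mismatch
    {"id": "P2", "blood_group": "O", "age": 60, "bmi": 31.0},
    {"id": "P3", "blood_group": "O", "age": 33, "bmi": 25.0},
]
best = best_recipient(donor, patients, weights)
```

With many real parameters, hand-set weights stop working, which is exactly why a learned model is used instead of a "simple old algorithm".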

Kavikrut

No, it's phenomenal: from a simple use case to very deep science. You went from morning presidential briefings to almost a Tinder for organs in one shot. This is great. I want to ensure, Rajeshji, that we have enough time for questions. We have a small audience here; I wish we could take live questions online too, but we don't have that facility yet. Any questions here? Very happy to take them, and you can point one at a particular panelist if you like. Yes, please, we will pass the microphone, right behind you. Please tell us your name, and you can point your question at any of us. Thank you very much.

Audience Member

I have two questions, one for Rajeshji and one for Kritikaji. Rajeshji, what medical breakthrough do you believe will come to market in the next three to four years from this integration of medical science and AI? It's a crystal ball question on healthcare: what breakthrough on the medical side that might happen with AI are you excited about? The second question is for Kritikaji. Your initiative is very impressive. What I want to know is: is the average poor person somewhere in a rural area aware of your initiatives? And if not, in what time frame do you believe you will be able to build that awareness?

Kavikrut

I will add a flavor to that question: can that awareness be unlocked further with AI? We will first start with Rajesh Babuji. Please go ahead.

Rajesh Babu

Thank you, very good question, appreciated. I think there will definitely be lots; it is going to be very transformative on the healthcare side. First, primary healthcare: a lot of it will move to AI. Doctors will start creating doctor agents, and your questions will first be addressed by your personal doctor agent, the basics. Then, based on that, your personal doctor agent will contact your doctor's agent, and then the doctor, if there is a need.

Kavikrut

I have a feeling that the doctor’s agent will talk to your patient’s agent

Rajesh Babu

Yes, yes

Kavikrut

Before it comes to the two humans

Rajesh Babu

Exactly

Kavikrut

So what is the breakthrough for you in that

Rajesh Babu

See, basically the access, with no waiting. Especially in the western countries, the waiting time for some of these specialists is unbelievable, something we would not have heard of in India; people wait for six months to one year.

Kavikrut

Right, that’s the waiting time for surgeries too.

Rajesh Babu

Yes, so I think that will definitely transform. And you talked about one thing, sir talked about the bamboo situation: why not global healthcare? Why can't there be a marketplace where a patient can immediately reach a doctor? Of course that is already happening, but it can happen at a much, much greater level, because these AI agents could be sitting at the hospital level, the doctor level, the patient level. As a patient, I may not be AI-savvy enough to reach various things, but my AI agent avatar, which I have created, if we can create it in some easy way, would reach it and definitely address it.

Kavikrut

No, I think you brought up a very important point, and we will go to Kritika right after: in highly skilled professions, where human capacity is significantly scarce, that capacity can be unlocked to a further level simply because agentic AI will come in. That's great; I hope, Yuvrajji, the first question is answered. Let me just add on to that. We were doing a joint program with Swissnex, which is the science and innovation arm of the embassy of Switzerland here, a kind of exchange program. One of the Indian startups that spent a week in Switzerland is trying to look at the profiles of cancer patients, map them to DNA, and predict probabilities. It seems they have created a test, still at the validation stage, for the probability of you getting cancer, based on all the data patterns in their database. They feel this will enable a much faster cure for cancer, because you'll be able to predict it much better.

Yeah. Kritika, over to you. I'll expand Yuvraj's question again; let me recap. He's asked something very interesting and important: in a country of a billion-plus people, despite the availability of technology, awareness of important, impactful things continues to be a challenge. Given your experience at Indus Action, do you feel there is an awareness problem, based on the work you've done with now 9 lakh students? And can that awareness challenge be leapfrogged using AI? Over to you.

Kritika Sangani

Absolutely. No, this is a really relevant question and something we are very actively working on. What we're saying is: we want to flip this question. Today the citizen has to discover the scheme. Can we use AI and tech to flip this and say, can the state discover the citizen? By building layers of AI and ML into existing data, which is exhaustive and which the state already has. What if I layer, say, MGNREGA data with the PDS repository and aspirational district data, and say: in this district, almost 95% of citizens are eligible for this particular entitlement? Currently, that kind of layering happens only for demographic longitudinal research.

Correct, absolutely; it is not used for actual access. And there is an organization, Educate Girls, which has a model wherein they have been able to use ML to identify and pinpoint the districts which need an intervention, the districts with the most out-of-school girls. So awareness flips into actual targeted outreach. That is exactly what we are now trying to attempt with the state: the problems of validating and verifying citizens, whether they are eligible or not. Can we use AI, ML, tech to flip this and let tech enable the state to discover the citizen? That is the fascinating part.

And that will also collapse those 10 steps into one, in a single shot.
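The "state discovers the citizen" idea boils down to joining entitlement rolls against eligibility data. A minimal sketch with invented field names and cutoffs (real schemes use far more complex, multi-criteria eligibility rules):

```python
def eligible_unenrolled(households, enrolled, income_cutoff):
    """Layer two state datasets keyed by household ID: flag households
    whose income qualifies them for a scheme but who never enrolled."""
    gaps = []
    for hid, record in households.items():
        if record["income"] <= income_cutoff and hid not in enrolled:
            gaps.append(hid)
    return sorted(gaps)

# Invented records: H1 already receives the entitlement, H2 is over the
# cutoff, H3 is eligible but was never discovered by the scheme.
households = {
    "H1": {"income": 40_000, "district": "D1"},
    "H2": {"income": 250_000, "district": "D1"},
    "H3": {"income": 55_000, "district": "D2"},
}
gaps = eligible_unenrolled(households, {"H1"}, income_cutoff=100_000)
```

The output is exactly the targeting list a frontline worker would need: households to reach out to, rather than waiting for them to discover the scheme.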

Kavikrut

What an answer. Love this. I think we can take one more question; we have a few minutes left. Keep your question short, and point it at whoever you want to ask. Please go ahead.

Audience Member 2

Thank you for a valuable session. I just wanted to ask: there are a lot of startups in India, but in which sector do you think there should be more startups than there currently are?

Kavikrut

Great question. Do you want to answer it? You and I can pick this up; you go first and I'll follow. Which sector, loosely defined, do you think holds a huge volume opportunity for founders?

Himanshu AIM

Okay. I think there are two levels to it. One is if I define sector as what we traditionally call a sector, say edtech or healthtech; the second would be whether it is frontier AI or non-AI. If you look at the current sectors, what most startups are doing, and this is my personal belief, not the organization's, is chasing venture capital money. If I felt edtech was hot today, let me go edtech. Today the same thing is happening with deep tech: just enable some element of AI or deep tech and say, I am a deep tech startup.

The point is not that. So I think, one, founders should look at some problem to solve. I'm not saying a social problem; some problem, some gap that they feel the current ecosystem or the current startups are not fulfilling. Or create your own niche. That's one. Second, what some of our programs also do, and I did talk briefly about it, is the social side: do we create solutions that solve problems in India? Because a lot of the problems that we solve in the Indian context are replicable to the global south: similar per capita GDP, similar environment, similar diversity, similar price points.

Kavikrut

That is great, a very interesting perspective. It's a fantastic question, and I'll answer it with one simple perspective. Foundership is a long journey. Even if you look at the current IPOs, they had all been building for 10 to 15 years. AI will not shorten that journey, but it will make the impact faster, I believe. So given the world we are in right now, and this massive, phenomenal super tool we are getting, I think founders should build things that take time, because those can now be accelerated; these are the essential problems. If I have to pick sectors, I would say healthcare and education, and not because we have them represented here. Even if 10 percent of the talent that's currently in fintech or consumer tech, whether engineers, founders, or investors, moved to healthcare, it would fundamentally change what this country can do and needs. It's the same for education. I don't want to say unemployment, but we have a massive young talent pool, the largest in the world. If, as founders and startups, we can build more tools for education, I think we will unlock a superpower for our country. So that's where founders should go. Thank you. One last question; you had one to ask us.

Yashi Audience Member 3

Hello, I'm Yashi. I wanted to ask one question; anybody on the panel can answer. My question is: how can we ensure that AI systems deployed for public welfare do not deepen digital divides, especially for rural and marginalized communities?

Kavikrut

What an amazing question: can AI deepen divides? And I'll take the liberty of not saying just digital; it could be economic, social, cultural. There are many divides in this country. How can we prevent that? Kritika, do you want to take a shot at it, given that you're in the welfare space? Then maybe all of us can give a quick, rapid-fire answer on what we can do proactively to reduce a potential divide driven by AI.

Kritika Sangani

Yeah, two perspectives. One is in our solutioning. To take a live example: when we embed, say, a lottery algorithm within a state, we also add an equity algorithm, to say that the gender ratios are going to be balanced, that girls and boys will have a 50-50 selection rate, or to guarantee a certain percentage for children with special needs or children from socioeconomically weaker backgrounds, SC, ST, OBC. So one answer is to proactively embed these algorithms to address exactly what you've suggested, which is also a problem we actively work on.
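A stratified draw is one simple way to embed such an equity rule in a lottery. An illustrative sketch, assuming a gender quota only (the actual algorithm Kritika describes also handles special-needs and social-category quotas):

```python
import random

def equity_draw(applicants, seats, seed, girl_share=0.5):
    """Stratified lottery: split the pool by gender and draw each stratum
    separately, so girls get about girl_share of the seats instead of
    leaving the ratio to chance."""
    rng = random.Random(seed)
    girls = [a for a in applicants if a["gender"] == "F"]
    boys = [a for a in applicants if a["gender"] == "M"]
    rng.shuffle(girls)
    rng.shuffle(boys)
    girl_seats = min(len(girls), round(seats * girl_share))
    # Fill the reserved girl seats first, then the rest from the boys.
    return girls[:girl_seats] + boys[:seats - girl_seats]

# Invented pool: 30 applicants, 10 seats; the quota pins 5 seats for girls.
pool = [{"id": i, "gender": "F" if i % 3 else "M"} for i in range(30)]
picked = equity_draw(pool, 10, seed=2024)
```

The quota is enforced by construction rather than checked after the fact, which is what "embedding" the equity rule in the algorithm means.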

The other is that I feel we cannot discard the human in the loop. AI has to make their jobs easier, whether it's my Anganwadi worker or my ASHA worker. They need to have access, just as Rajbabuji was saying, to information which is so simple and so easy to disseminate that they don't have to spend hours and hours parsing through complex eligibility criteria for welfare schemes. Those are the two perspectives that I have.
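The equity layer Kritika describes, a lottery constrained to balance groups rather than a single pooled draw, can be pictured as a stratified draw. The sketch below is purely illustrative (the function name, data shapes, and equal-floor rule are assumptions for this example, not the actual production system):

```python
import random

def equity_lottery(applicants, seats):
    """Illustrative stratified lottery: draw winners group by group so each
    group gets an equal floor of seats (e.g. a 50-50 gender split), instead
    of one global random draw that can skew toward the larger group."""
    # Bucket applicants by the equity dimension (here a single "group" key).
    groups = {}
    for a in applicants:
        groups.setdefault(a["group"], []).append(a)

    winners = []
    quota = seats // len(groups)  # equal floor per group
    for members in groups.values():
        random.shuffle(members)   # random draw within each group
        winners.extend(members[:quota])

    # Fill any seats left over (rounding, or a group smaller than its quota)
    # from the remaining pool at random.
    leftover = [a for a in applicants if a not in winners]
    random.shuffle(leftover)
    winners.extend(leftover[:seats - len(winners)])
    return winners
```

With two groups and an even seat count, the equal floor guarantees the 50-50 outcome described above; a production system would also handle proportional quotas, multiple equity dimensions, and auditable random seeds.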

Kavikrut

Fantastic. You are basically saying: build a pro-social equity bias into AI.

Kritika Sangani

Absolutely.

Kavikrut

Rajbabuji, do you have a quick rapid fire answer to this?

Rajesh Babu

Thank you. I also want to answer the previous question, because that was a very good question; I will quickly touch on that and then come to this one. Like Kavi said, don't chase the VC money. Monetization should always follow the value. What you should be focusing on is: are you creating value? What is happening is that many are chasing monetization and missing the value. Are you making a difference? If you are making a difference, wherever it is, however small it is, like he said, health tech especially has a lot of potential; we need to be making disruption there. So focus on that.

On the digital divide and social equity: almost everybody already has, or is soon going to have, a smartphone. Once they have a smartphone, they have AI. So AI is not going to divide; if anything, it's flattening everybody out. For example, people with computer engineering degrees, the programmers, used to be on a pedestal. Now AI has brought innovation down to everyone, not only to programmers. If anything, AI is the biggest equalizer; it is not a divider.

Kavikrut

Fantastic Himanshuji, any view on that? What can we do proactively?

Himanshu AIM

Yeah, I sort of agree with what everybody has said. With AI and the smartphone, and given that we are one of the largest consumers of Internet in the world, we have sort of narrowed the divide between rural and urban. But there's one more divide which we generally don't talk about: the language divide. There are 22 scheduled languages; no other country in the world has 22 scheduled languages, and those are just the scheduled ones. A state like UP looks like it speaks Hindi, but as you travel from the western part to the eastern part, the dialect changes. So that's another divide that is being bridged by AI, and everybody's building solutions around it.

All the LLM models being developed, both by the government and by some of the larger companies that want to work with our data, are part of that. In the future, one thing we need to be very cognizant about is what I started with: the divide between the western and southern states at one end of the spectrum and the eastern and northern states at the other. The challenge is that a lot of mentorship and VC money has not reached them, so they are still trying to build native solutions, which is not bad. But for them to really equalize with some of the other states, that's where AI is going to enable them.

And some of the state governments are building and all of us are playing our own little role in that.

Kavikrut

This is great. You brought it all together, Himanshuji. We'll wrap this panel now. I think AI is here for good, and it's our duty to build AI for good. What an interesting conversation; the divide is a matter of what we do. It's great to have all your perspectives. Thank you everyone for joining us and for this conversation. This is the photo of the marathon; that's why they take it at the start. You can check it out there. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Kritika Sangani
6 arguments · 154 words per minute · 1552 words · 602 seconds
Argument 1
AI should enable equitable access to social protection for vulnerable citizens
EXPLANATION
Kritika argues that AI for good means enabling equitable access to social protection for vulnerable citizens. She emphasizes reducing the burden on citizens by simplifying the process from 10 burdensome steps to access a single entitlement down to a single touch process.
EVIDENCE
Currently vulnerable citizens take about 10 burdensome steps to access a single entitlement, and AI can bring that down to a single touch process
MAJOR DISCUSSION POINT
AI for Social Good and Impact
AGREED WITH
Himanshu AIM, Rajesh Babu
Argument 2
Technology can reduce 10 burdensome steps to access entitlements down to a single touch process
EXPLANATION
Kritika explains how technology, particularly AI, can dramatically simplify the process for vulnerable citizens to access their entitlements. This represents a fundamental shift from complex bureaucratic processes to streamlined digital access.
EVIDENCE
Example of RTE (Right to Education) implementation where they reduced complex application processes through digital lottery systems and online platforms
MAJOR DISCUSSION POINT
Scaling and Democratizing Access Through Technology
AGREED WITH
Himanshu AIM, Rajesh Babu
Argument 3
AI enables states to discover eligible citizens rather than citizens having to discover schemes
EXPLANATION
Kritika proposes flipping the traditional model where citizens must discover available schemes. Instead, she suggests using AI to enable states to proactively identify and reach eligible citizens by analyzing existing government data repositories.
EVIDENCE
Using AI to layer VBG Ramji/MGNREGA data with PDS repository and aspirational district data to identify 95% eligible citizens in districts; example of Educate Girls using ML to identify districts with most out-of-school girls
MAJOR DISCUSSION POINT
Scaling and Democratizing Access Through Technology
AGREED WITH
Himanshu AIM
DISAGREED WITH
Himanshu AIM
Argument 4
Digital lottery system for school admissions scaled from 196 to 900,000 children benefiting
EXPLANATION
Kritika describes the success of implementing a digital lottery system for Right to Education Act admissions. The system evolved from serving only 196 students initially to benefiting 900,000 children across 18 states over 10 years.
EVIDENCE
Started with 19,000 applications in Delhi 2013-2014 with only 196 students getting admission calls; now serves 900,000 children across 18 states; 20 lakh seats available annually with current 60% coverage up from 30%
MAJOR DISCUSSION POINT
Practical AI Applications and Success Stories
Argument 5
WhatsApp multilingual chatbots improve targeting and reduce frontline worker burden
EXPLANATION
Kritika explains how they use WhatsApp chatbots with multilingual capabilities to serve as the first interface for parents applying for educational rights. This reduces the burden on overburdened frontline workers while improving targeting of the most vulnerable populations.
EVIDENCE
WhatsApp chatbot serves as first interface for students/parents to apply and builds frontline worker capacity with multilingual advantage and physical touch points for navigation support
MAJOR DISCUSSION POINT
Practical AI Applications and Success Stories
Argument 6
AI systems should embed equity algorithms to ensure balanced representation and prevent deepening divides
EXPLANATION
Kritika advocates for proactively embedding equity algorithms within AI systems to ensure balanced representation across gender, socioeconomic backgrounds, and special needs. She also emphasizes maintaining human-in-the-loop approaches to support frontline workers.
EVIDENCE
Example of embedding equity algorithms in lottery systems to ensure 50-50 gender ratios and balanced representation of children with special needs, SEBC, OBC backgrounds; making Anganwadi and ASHA workers’ jobs easier with simple information access
MAJOR DISCUSSION POINT
Addressing Digital Divides and Inclusion
DISAGREED WITH
Rajesh Babu
Himanshu AIM
9 arguments · 188 words per minute · 2364 words · 751 seconds
Argument 1
AI for good means creating solutions that impact grassroots level and can affect a billion lives
EXPLANATION
Himanshu defines AI for good as solutions that have measurable impact at the grassroots level and can scale to affect a billion lives. He emphasizes building social capital rather than just financial unicorns, focusing on solutions that improve public service delivery across all sectors.
EVIDENCE
Atal Innovation Mission works from ‘school to space’ covering all innovation lifecycles; focus on building unicorns in terms of social capital rather than valuation; working with state governments on AI for health and public service delivery
MAJOR DISCUSSION POINT
AI for Social Good and Impact
Argument 2
State innovation missions can bridge the technology gap between developed and developing regions
EXPLANATION
Himanshu identifies significant disparities between western/southern states and northeastern/eastern regions in AI readiness and proposes state innovation missions as a solution. These missions aim to bring all regions to similar technological capabilities and create peer-to-peer learning networks.
EVIDENCE
Telangana, Andhra, Karnataka, Maharashtra are at evolved country levels while northeast and eastern states are at basic levels; launching first state innovation mission next week in an unnamed progressive state
MAJOR DISCUSSION POINT
Infrastructure and Regional Development
Argument 3
State innovation missions can create peer-to-peer learning networks between regions
EXPLANATION
Himanshu proposes creating networks where different states can learn from each other’s strengths. For example, states with strong organic agriculture can share knowledge with other agrarian states, creating a collaborative innovation ecosystem.
EVIDENCE
Example of Sikkim’s organic vegetation/agriculture expertise that could be shared with other agrarian states through peer-to-peer learning networks
MAJOR DISCUSSION POINT
Infrastructure and Regional Development
Argument 4
There’s significant disparity between western/southern states and northeastern/eastern regions in AI readiness
EXPLANATION
Himanshu highlights the technology gap where southern and western states are operating at levels comparable to evolved countries, while northeastern and eastern states are still at basic levels. This disparity affects startup ecosystems and innovation capacity across regions.
EVIDENCE
Telangana, Andhra, Karnataka, Maharashtra discussing quantum and AI for health at evolved levels; northeast and eastern states not even at basic level; good talent migrates to Hyderabad and Bangalore
MAJOR DISCUSSION POINT
Infrastructure and Regional Development
Argument 5
Existing innovations and data in government systems can be leveraged through AI
EXPLANATION
Himanshu reveals that governments possess extensive data and documented innovations that remain underutilized. AI can help identify, validate, and commercialize these existing innovations, potentially creating more startups and jobs from grassroots innovations.
EVIDENCE
Example state has 1,100 documented grassroot innovations vs 120-130 startups; 3,000 total innovations with twice the per capita rate of Maharashtra; innovations like plant combinations for blood sugar control; data sits on National Innovation Foundation dashboard unknown to state government
MAJOR DISCUSSION POINT
Infrastructure and Regional Development
AGREED WITH
Kritika Sangani
DISAGREED WITH
Kritika Sangani
Argument 6
AI can identify water quality issues and bamboo market opportunities at district level
EXPLANATION
Himanshu provides specific examples of how AI can solve local problems by analyzing district and sub-district level data. This includes identifying iron content in water and building low-cost solutions, as well as connecting local bamboo producers to global markets.
EVIDENCE
Hackathon to identify iron content in water at district/sub-district levels and build 100 rupee diagnostic kits; bamboo quality better than China/Vietnam but getting 1/10th to 1/20th the price due to lack of market access; bamboo used in Bangalore airport terminal appreciated globally
MAJOR DISCUSSION POINT
Practical AI Applications and Success Stories
AGREED WITH
Kritika Sangani, Rajesh Babu
Argument 7
Language barriers across 22 scheduled languages can be addressed through AI solutions
EXPLANATION
Himanshu identifies language diversity as a unique challenge in India with 22 scheduled languages and numerous dialects. He argues that AI is democratizing this language divide by enabling solutions that work across different languages and dialects.
EVIDENCE
22 scheduled languages in India, more than any other country; dialects change across regions within states like UP; LLM models being developed by government and companies to address language barriers
MAJOR DISCUSSION POINT
Addressing Digital Divides and Inclusion
AGREED WITH
Kritika Sangani, Rajesh Babu
Argument 8
Founders should focus on creating value and solving real problems rather than chasing venture capital
EXPLANATION
Himanshu advises founders to identify genuine problems or gaps rather than following venture capital trends. He emphasizes that monetization should follow value creation, and founders should focus on making a real difference regardless of the sector.
EVIDENCE
Observation that startups chase VC money by jumping from EdTech to DeepTech trends; emphasis on solving problems in Indian context that are replicable to global south due to similar GDP, environment, diversity, and price points
MAJOR DISCUSSION POINT
Addressing Digital Divides and Inclusion
AGREED WITH
Kavikrut
Argument 9
AI can predict cancer probability by mapping patient profiles to DNA data
EXPLANATION
Himanshu mentions a joint program with Swiss embassy where an Indian startup is developing AI solutions to predict cancer probability by analyzing patient profiles and DNA mapping. This represents the potential for AI to enable faster cancer treatment through better prediction.
EVIDENCE
Indian startup in Switzerland exchange program creating test to map cancer patient profiles to DNA and predict cancer probability; still in validation state but aims to enable faster cancer cure
MAJOR DISCUSSION POINT
Healthcare and Education Sector Opportunities
Rajesh Babu
7 arguments · 184 words per minute · 2059 words · 670 seconds
Argument 1
AI will democratize access to healthcare in terms of both price and availability
EXPLANATION
Rajesh argues that AI will make healthcare more accessible by reducing costs and increasing availability. He believes AI will transform healthcare delivery by making services more widely available to populations that currently lack access.
MAJOR DISCUSSION POINT
AI for Social Good and Impact
Argument 2
AI is like electrification – a transformative energy that will touch everything and make it intelligent
EXPLANATION
Rajesh compares AI to historical technological transformations like the shift from agriculture to industrialization, steam to electricity. He argues that AI will be similarly transformative, touching every aspect of life and creating more opportunities rather than just eliminating jobs.
EVIDENCE
Historical examples of technological transitions: agriculture to industrialization, horse power to steam power, steam to electricity; reference to late 1800s prediction that patent office would close because everything had been invented
MAJOR DISCUSSION POINT
AI for Social Good and Impact
Argument 3
AI democratizes innovation by flattening hierarchies and bringing capabilities to everyone with smartphones
EXPLANATION
Rajesh argues that AI is the biggest equalizer rather than a divider, bringing innovation capabilities to everyone with a smartphone. He contends that AI has democratized programming and innovation beyond just computer engineers to all sectors.
EVIDENCE
Everyone has or will have smartphones with AI access; AI has brought programming capabilities beyond computer engineers to everyone; AI flattens rather than divides
MAJOR DISCUSSION POINT
Scaling and Democratizing Access Through Technology
AGREED WITH
Kritika Sangani, Himanshu AIM
DISAGREED WITH
Kritika Sangani
Argument 4
AI-powered morning briefings for pharmaceutical reps became viable after previous failed attempts
EXPLANATION
Rajesh describes how AI technology finally enabled a solution they had attempted years earlier. The system provides pharmaceutical representatives with voice briefings about upcoming doctor meetings, including conversation history and required information, which was previously impossible due to technology limitations.
EVIDENCE
2018 project with pharma companies failed using AWS Lex and Poly due to poor accent recognition and medical keyword understanding; invested $1.5-2 million before shelving; same solution implemented successfully a year back with improved AI
MAJOR DISCUSSION POINT
Practical AI Applications and Success Stories
AGREED WITH
Kritika Sangani, Himanshu AIM
Argument 5
AI enables better organ matching for transplants by analyzing multiple biological parameters
EXPLANATION
Rajesh describes advanced AI applications in healthcare, specifically organ transplant matching. The AI system analyzes multiple biological and physiological parameters to determine the best recipient match, going beyond simple waiting list priority to optimize transplant success rates.
EVIDENCE
Working with Scripps Institute in San Diego with researcher Sunil Kurian; AI analyzes biological and physiological parameters for liver transplant matching; published research shows AI can predict better recipient matches and post-transplant living conditions
MAJOR DISCUSSION POINT
Practical AI Applications and Success Stories
Argument 6
Primary healthcare will be transformed through AI doctor agents handling initial consultations
EXPLANATION
Rajesh envisions a future where AI doctor agents will handle initial healthcare consultations, with doctor agents and patient agents communicating before involving human doctors. This will address waiting time issues, particularly in Western countries where specialist appointments can take 6 months to a year.
EVIDENCE
Western countries have 6 months to 1 year waiting times for specialists; AI agents at hospital, doctor, and patient levels can create healthcare marketplace; personal AI agents can be created for individuals even if they’re not AI-savvy
MAJOR DISCUSSION POINT
Healthcare and Education Sector Opportunities
Argument 7
Healthcare marketplace can be globalized through AI agents at hospital, doctor, and patient levels
EXPLANATION
Rajesh proposes that AI agents operating at multiple levels (hospitals, doctors, patients) can create a global healthcare marketplace. This would enable patients to access healthcare services globally through their AI agents, even if they are not technically savvy themselves.
EVIDENCE
AI agents can operate at hospital level, doctor level, and patient level; patients can have AI agent avatars created in easy ways to reach various healthcare services globally
MAJOR DISCUSSION POINT
Healthcare and Education Sector Opportunities
Kavikrut
2 arguments · 175 words per minute · 2461 words · 840 seconds
Argument 1
India has strong digital infrastructure with 22GB average monthly data usage enabling AI adoption
EXPLANATION
Kavikrut highlights India’s robust digital infrastructure as evidenced by the high data consumption rates. He points to multiple infrastructure layers including 5G, smartphone usage, large ChatGPT user base, and high mobile data consumption as enablers for AI adoption and scaling.
EVIDENCE
22GB average monthly 5G mobile data usage in India; largest user base for ChatGPT; multiple infrastructure layers from 5G to smartphone usage
MAJOR DISCUSSION POINT
Infrastructure and Regional Development
Argument 2
More founders should focus on healthcare and education sectors for maximum impact
EXPLANATION
Kavikrut argues that if even 10% of talent currently in fintech or consumer tech moved to healthcare, it would fundamentally change the country’s capabilities. He emphasizes that with India’s large young talent pool, building more educational tools could unlock a national superpower.
EVIDENCE
India has the largest young talent pool in the world; current concentration of talent in fintech and consumer tech; potential for fundamental change if talent shifts to healthcare and education
MAJOR DISCUSSION POINT
Healthcare and Education Sector Opportunities
AGREED WITH
Himanshu AIM
Audience Member
2 arguments · 172 words per minute · 129 words · 44 seconds
Argument 1
Questioned whether poor rural populations are aware of available initiatives and services
EXPLANATION
An audience member raised concerns about awareness gaps, specifically asking whether average poor people in rural areas are aware of the initiatives being discussed and what timeframe might be needed to achieve such awareness.
MAJOR DISCUSSION POINT
Questions About Awareness and Implementation
Argument 2
Asked about expected medical breakthroughs from AI integration in next 3-4 years
EXPLANATION
An audience member posed a forward-looking question about what specific medical breakthroughs could be expected from the integration of AI and medical science in the near term, seeking concrete predictions about healthcare transformation.
MAJOR DISCUSSION POINT
Questions About Awareness and Implementation
Audience Member 2
1 argument · 174 words per minute · 38 words · 13 seconds
Argument 1
Inquired about sectors needing more startup focus and innovation
EXPLANATION
An audience member asked the panel to identify which sectors they believe should have more startups but currently don’t, seeking guidance on where entrepreneurial energy should be directed for maximum impact.
MAJOR DISCUSSION POINT
Questions About Awareness and Implementation
Yashi Audience Member 3
1 argument · 142 words per minute · 40 words · 16 seconds
Argument 1
Raised concerns about AI systems potentially deepening digital divides for marginalized communities
EXPLANATION
Yashi asked how to ensure that AI systems deployed for public welfare do not deepen digital divides, particularly for rural and marginalized communities. This question addresses the potential negative consequences of AI implementation on already disadvantaged populations.
MAJOR DISCUSSION POINT
Questions About Awareness and Implementation
Agreements
Agreement Points
AI democratizes access and flattens divides rather than creating them
Speakers: Kritika Sangani, Himanshu AIM, Rajesh Babu
AI should enable equitable access to social protection for vulnerable citizens
Language barriers across 22 scheduled languages can be addressed through AI solutions
AI democratizes innovation by flattening hierarchies and bringing capabilities to everyone with smartphones
All speakers agree that AI serves as an equalizing force that can reduce existing divides – whether social, economic, linguistic, or technological – by making services and capabilities more accessible to broader populations
Technology should simplify complex processes for citizens
Speakers: Kritika Sangani, Himanshu AIM, Rajesh Babu
Technology can reduce 10 burdensome steps to access entitlements down to a single touch process
AI can identify water quality issues and bamboo market opportunities at district level
AI-powered morning briefings for pharmaceutical reps became viable after previous failed attempts
All speakers emphasize that technology, particularly AI, should be used to simplify complex processes and make them more user-friendly, whether for accessing government services, solving local problems, or improving professional workflows
AI enables better targeting and discovery of beneficiaries
Speakers: Kritika Sangani, Himanshu AIM
AI enables states to discover eligible citizens rather than citizens having to discover schemes
Existing innovations and data in government systems can be leveraged through AI
Both speakers agree that AI can flip the traditional model by enabling proactive identification of beneficiaries and leveraging existing government data to better target services and identify opportunities
Focus should be on creating value and solving real problems rather than chasing trends
Speakers: Himanshu AIM, Kavikrut
Founders should focus on creating value and solving real problems rather than chasing venture capital
More founders should focus on healthcare and education sectors for maximum impact
Both speakers emphasize that entrepreneurs and innovators should prioritize solving genuine problems and creating real value, particularly in essential sectors like healthcare and education, rather than following investment trends
Similar Viewpoints
Both speakers view AI as a tool for democratizing access to essential services, with Kritika focusing on social protection and welfare, while Rajesh focuses on healthcare accessibility
Speakers: Kritika Sangani, Rajesh Babu
AI should enable equitable access to social protection for vulnerable citizens
AI will democratize access to healthcare in terms of both price and availability
Both speakers acknowledge regional disparities in technology adoption while recognizing India’s overall strong digital infrastructure foundation that can support AI scaling
Speakers: Himanshu AIM, Kavikrut
There’s significant disparity between western/southern states and northeastern/eastern regions in AI readiness
India has strong digital infrastructure with 22GB average monthly data usage enabling AI adoption
Both speakers emphasize the importance of designing AI systems that actively address inclusion challenges, whether through equity algorithms or multilingual capabilities
Speakers: Kritika Sangani, Himanshu AIM
AI systems should embed equity algorithms to ensure balanced representation and prevent deepening divides
Language barriers across 22 scheduled languages can be addressed through AI solutions
Unexpected Consensus
AI as a democratizing rather than dividing force
Speakers: Kritika Sangani, Himanshu AIM, Rajesh Babu
AI should enable equitable access to social protection for vulnerable citizens
Language barriers across 22 scheduled languages can be addressed through AI solutions
AI democratizes innovation by flattening hierarchies and bringing capabilities to everyone with smartphones
Despite common concerns about AI creating digital divides, all speakers unanimously view AI as a democratizing force. This consensus is unexpected given widespread global debates about AI potentially widening inequalities
Government data as an underutilized resource for AI applications
Speakers: Kritika Sangani, Himanshu AIM
AI enables states to discover eligible citizens rather than citizens having to discover schemes
Existing innovations and data in government systems can be leveraged through AI
Both speakers, despite coming from different organizational backgrounds (NGO and government), agree that government data repositories are vastly underutilized and can be transformed through AI into powerful tools for citizen service delivery
Overall Assessment

The speakers demonstrate strong consensus on AI’s democratizing potential, the need to simplify citizen services, and the importance of focusing on real problem-solving over trend-following. They share optimistic views about AI’s ability to bridge divides and scale impact.

High level of consensus with complementary perspectives from different sectors (government, private, non-profit) reinforcing shared themes of accessibility, simplification, and equitable impact. This strong alignment suggests a mature understanding of AI’s potential for social good and indicates promising conditions for collaborative implementation across sectors.

Differences
Different Viewpoints
Approach to addressing digital divides – technology-first vs human-centered
Speakers: Kritika Sangani, Rajesh Babu
AI systems should embed equity algorithms to ensure balanced representation and prevent deepening divides
AI democratizes innovation by flattening hierarchies and bringing capabilities to everyone with smartphones
Kritika emphasizes the need for proactive measures like equity algorithms and human-in-the-loop approaches to prevent AI from deepening divides, while Rajesh argues that AI is inherently democratizing and will naturally flatten inequalities through smartphone access
Discovery mechanism for social services
Speakers: Kritika Sangani, Himanshu AIM
AI enables states to discover eligible citizens rather than citizens having to discover schemes
Existing innovations and data in government systems can be leveraged through AI
Kritika advocates for a state-driven discovery model where AI helps governments proactively identify eligible citizens, while Himanshu focuses on leveraging existing government data and innovations to create solutions, representing different approaches to utilizing government resources
Unexpected Differences
Role of human intervention in AI systems
Speakers: Kritika Sangani, Rajesh Babu
AI systems should embed equity algorithms to ensure balanced representation and prevent deepening divides
AI democratizes innovation by flattening hierarchies and bringing capabilities to everyone with smartphones
This disagreement is unexpected because both speakers work in social impact domains, yet they have fundamentally different views on whether AI requires active human oversight and intervention (Kritika’s position) or whether it naturally creates equality through access (Rajesh’s position)
Overall Assessment

The discussion revealed subtle but significant disagreements around implementation approaches for AI in social good applications, particularly regarding the need for proactive equity measures versus relying on natural democratization effects of technology

Low to moderate disagreement level – speakers shared common goals of using AI for social good but differed on methodological approaches. The disagreements are more about strategy and implementation rather than fundamental objectives, which suggests room for complementary approaches rather than conflicting paradigms

Partial Agreements
All speakers agree that AI should democratize access and create social impact, but they differ on implementation approaches – Kritika emphasizes systematic equity measures, Himanshu focuses on infrastructure and regional development, while Rajesh believes market forces and technology adoption will naturally achieve democratization
Speakers: Kritika Sangani, Himanshu AIM, Rajesh Babu
AI should enable equitable access to social protection for vulnerable citizens
AI for good means creating solutions that impact grassroots level and can affect a billion lives
AI will democratize access to healthcare in terms of both price and availability
Both agree that founders should focus on creating real value and impact, but Himanshu emphasizes problem-solving regardless of sector while Kavikrut specifically advocates for talent movement to healthcare and education sectors
Speakers: Himanshu AIM, Kavikrut
Founders should focus on creating value and solving real problems rather than chasing venture capital
More founders should focus on healthcare and education sectors for maximum impact
Takeaways
Key takeaways
AI serves as a democratizing force that can provide equitable access to social protection, healthcare, and education by reducing complex multi-step processes to single-touch solutions
AI acts as transformative infrastructure similar to electrification, with potential to impact every sector and make systems intelligent rather than just automated
Successful AI implementation requires embedding equity algorithms and maintaining human-in-the-loop approaches to prevent deepening existing divides
India’s digital infrastructure (22GB average monthly data usage, widespread smartphone adoption) positions it well for AI-driven social impact at scale
Regional disparities exist between western/southern states and northeastern/eastern regions in AI readiness, but state innovation missions can bridge these gaps through peer-to-peer learning
AI enables governments to proactively discover eligible citizens for welfare schemes rather than requiring citizens to discover available programs
Healthcare and education sectors present the greatest opportunities for founders to create meaningful impact using AI tools
Language barriers across India’s 22 scheduled languages can be addressed through AI solutions, further democratizing access
Resolutions and action items
Launch of unnamed state innovation mission scheduled for the following week to bridge regional AI readiness gaps
Development of WhatsApp multilingual chatbots for improved targeting and reduced frontline worker burden in welfare delivery
Implementation of AI-powered organ matching systems in collaboration with Scripps Institute in San Diego
Creation of peer-to-peer learning networks between states through innovation missions
Building of United Entitlements Interface (UEI) – a UPI-like system for constitutional rights and welfare access
Unresolved issues
How to ensure widespread awareness of AI-enabled welfare initiatives reaches the most marginalized rural populations
Specific timeline and methodology for scaling successful pilots from thousands to millions of beneficiaries
Detailed framework for embedding equity algorithms in AI systems to prevent bias against vulnerable populations
Mechanisms for ensuring AI solutions remain accessible to communities with limited digital literacy
Standardization approaches for AI implementations across different state governments with varying technical capabilities
Long-term sustainability and maintenance of AI systems in resource-constrained government environments
Suggested compromises
Maintaining human-in-the-loop approaches rather than full automation to ensure frontline workers remain engaged and systems remain accessible
Focusing on enhancing existing government systems rather than creating parallel infrastructure to ensure sustainability and adoption
Balancing technological advancement with regional readiness by creating state-specific innovation missions rather than uniform national rollouts
Combining AI automation with human agents to handle complex cases that require personal interaction and cultural sensitivity
Thought Provoking Comments
Right now, they take about 10 steps, 10 burdensome steps to access a single entitlement. How do we bring that down to a single touch process? That is what AI for good stands for us.
This comment crystallizes a concrete, measurable vision for AI’s social impact – transforming bureaucratic complexity into simplicity. It moves beyond abstract concepts to define a specific metric of success (10 steps to 1) that resonates with anyone who has navigated government services.
This framing became a recurring theme throughout the discussion, with Kavikrut repeatedly referencing the ‘10 steps to 1’ concept and the moderator using it to tie together different speakers’ examples. It established a practical framework for evaluating AI initiatives.
Speaker: Kritika Sangani
Can we flip this and say can the state discover the citizen? …building in those layers of AI ML into existing data which is exhaustive and it is with the state…awareness will flip to actual targeted outreach.
This represents a fundamental paradigm shift from citizen-initiated to state-initiated welfare delivery. It challenges the traditional model where citizens must navigate complex systems to access their rights, proposing instead that AI could proactively identify eligible citizens.
This comment reframed the entire discussion about access and equity. It moved the conversation from ‘how do we help people navigate systems better’ to ‘how do we make systems find and serve people automatically,’ representing a more transformative vision of AI’s potential.
Speaker: Kritika Sangani
We are not building unicorns, but we are building or we are saying not unicorn in terms of valuation, but in terms of social capital, right? Where we say that can it start impacting a billion lives, right?
This redefines success metrics for innovation from financial valuation to social impact scale. It challenges the startup ecosystem’s focus on monetary unicorns and proposes ‘social capital unicorns’ as a more meaningful measure for public good initiatives.
This comment shifted the discussion’s value framework, influencing how other panelists discussed their work. It provided a new lens for evaluating AI initiatives and reinforced the ‘AI for good’ theme by establishing impact scale as the primary success metric.
Speaker: Himanshu AIM
So people are talking about minimum wages they are talking about you know labor treatment they are talking about human rights…who would have thought that a food delivery company right food delivery multiple food delivery and grocery delivery companies will actually create an organized labor market in our country
This observation reveals how technology companies can inadvertently become major social and economic forces, creating organized labor markets in unexpected ways. It highlights the unintended but significant social consequences of tech platforms.
This comment broadened the discussion beyond intentional ‘AI for good’ initiatives to include the broader social implications of technology adoption. It added complexity to the conversation by showing how social impact can emerge organically from commercial ventures.
Speaker: Kavikrut
If anything, AI is the biggest equality. It is not a divider…Once they have a smart phone, they have AI. So AI is not going to divide. If anything, it’s flattening everybody out.
This directly challenges the common narrative about AI deepening digital divides, arguing instead that AI democratizes capabilities. It’s a bold counter-narrative that reframes AI as an equalizing rather than dividing force.
This comment sparked a mini-debate about AI’s role in equality vs. inequality, with other panelists building on this perspective. It shifted the final portion of the discussion toward a more optimistic view of AI’s democratizing potential, particularly when combined with widespread smartphone adoption.
Speaker: Rajesh Babu
There’s a huge disparity between the western and the southern parts at one end of the spectrum and the northeast, eastern and northern part of the country…They are really, really not even at the basic level…But those who have remained back, right? They’re actually building something really incredible for the state itself
This honest acknowledgment of regional innovation disparities within India adds nuance to discussions about national AI strategy. It recognizes that ‘AI for good’ must account for vastly different starting points across regions while highlighting untapped potential in less developed areas.
This comment grounded the discussion in India’s geographic and economic realities, leading to concrete examples of how AI could address region-specific challenges like iron content in water and bamboo market access. It made the conversation more practical and locally relevant.
Speaker: Himanshu AIM
Overall Assessment

These key comments fundamentally shaped the discussion by establishing concrete frameworks for measuring AI’s social impact, challenging conventional assumptions about technology’s role in society, and grounding abstract concepts in practical realities. The conversation evolved from general statements about ‘AI for good’ to specific, actionable visions of systemic transformation. Kritika’s ‘10 steps to 1’ framework and ‘state discovers citizen’ paradigm provided concrete goals, while Himanshu’s ‘social capital unicorns’ concept redefined success metrics. The discussion gained depth through honest acknowledgment of regional disparities and was enriched by examples showing how commercial technology can create unexpected social benefits. These comments collectively moved the conversation beyond typical AI hype toward a more nuanced understanding of how AI can address systemic challenges in governance, healthcare, and social equity.

Follow-up Questions
How can AI systems deployed for public welfare ensure they do not deepen digital divides, especially for rural and marginalized communities?
This addresses a critical concern about AI potentially exacerbating existing inequalities rather than reducing them, which is fundamental to ensuring AI truly serves social good
Speaker: Yashi (Audience Member 3)
What medical breakthrough will come to market in the next 3-4 years after integration of medical science and AI?
This seeks to understand the practical timeline and specific applications of AI in healthcare that could have immediate impact on patient outcomes
Speaker: Audience Member
Is the average poor person in rural areas aware of Indus Action’s initiatives, and in what timeframe will awareness reach them?
This highlights the critical gap between developing solutions and ensuring the intended beneficiaries actually know about and can access them
Speaker: Audience Member
In which sectors should there be more startups that currently aren’t adequately represented?
This seeks to identify underserved areas where entrepreneurial innovation could have significant impact, particularly in the context of AI for good
Speaker: Audience Member 2
How can AI enable the state to discover eligible citizens rather than requiring citizens to discover schemes?
This represents a fundamental shift in how social welfare systems could operate, using AI for proactive identification rather than reactive application processes
Speaker: Kritika Sangani
How can peer-to-peer learning networks between states be effectively established through AI?
This explores how AI could facilitate knowledge sharing and best practice transfer between different regions with varying levels of development
Speaker: Himanshu AIM
How can AI help predict cancer probability by mapping patient profiles to DNA data?
This represents a specific application of AI in preventive healthcare that could revolutionize early detection and treatment
Speaker: Himanshu AIM
How can AI agents at hospital, doctor, and patient levels create a comprehensive healthcare marketplace?
This explores the potential for AI to create seamless healthcare ecosystems that could dramatically improve access and efficiency
Speaker: Rajesh Babu

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Why science matters in global AI governance


Session at a glance: Summary, keypoints, and speakers overview

Summary

This discussion focused on the critical role of science in international AI governance, centered around the United Nations’ establishment of an Independent International Scientific Panel on Artificial Intelligence. UN Secretary-General António Guterres opened by emphasizing that effective AI governance requires facts rather than hype, announcing the confirmation of 40 experts to the new scientific panel designed to provide evidence-based analysis for global AI policymaking.


The conversation highlighted the unique challenges of governing AI technology, particularly its rapid pace of development and global reach that transcends national borders. Professor Yoshua Bengio, a leading AI researcher and panel member, discussed the difficulty of achieving scientific consensus on AI risks and benefits, drawing parallels to climate science where uncertainty about catastrophic outcomes still requires policy attention. He emphasized the need for high-level principles that can adapt to rapidly changing technological details.


Microsoft’s Brad Smith stressed the importance of building common understanding before rushing to solutions, arguing that disagreements often stem from lack of shared problem definition rather than fundamental differences. He advocated for using AI to make humans smarter rather than simply creating smarter machines, and praised the UN’s role in fostering international cooperation.


The panel discussion explored practical challenges in the science-policy interface, with experts from India, France, WHO, and Singapore sharing experiences from COVID-19 response, digital public infrastructure deployment, and national AI strategies. Key themes included the need for inclusive governance that represents diverse global perspectives, particularly from the Global South, and the importance of evidence-based policymaking that can adapt quickly to emerging risks and opportunities.


The session concluded with Singapore’s commitment to continued international collaboration, demonstrating how multilateral scientific cooperation can inform more effective and equitable AI governance globally.


Keypoints

Major Discussion Points:

Science-based AI governance framework: The establishment of the UN’s Independent International Scientific Panel on AI to provide evidence-based analysis and create shared understanding across nations, moving beyond philosophical debates to technical coordination and risk assessment.


Bridging the knowledge-policy gap: The challenge of translating rapidly evolving AI research into actionable policy, particularly given the speed of AI development outpacing traditional scientific study and policy-making timelines.


Global cooperation and inclusivity: The need for international coordination to prevent fragmented AI governance, with emphasis on ensuring developing countries and diverse voices (including women, farmers, and marginalized communities) are included in AI policy discussions.


Balancing innovation with safety: The tension between moving quickly to harness AI’s benefits for sustainable development goals while moving carefully to assess and mitigate risks, particularly around employment impacts, social effects, and potential catastrophic outcomes.


Practical implementation challenges: Moving from high-level AI principles (transparency, accountability, fairness) to operational standards, benchmarks, and technical solutions that can work across different regulatory contexts and cultural settings.


Overall Purpose:

The discussion aimed to launch and legitimize the UN’s new science-policy interface for AI governance, specifically the Independent International Scientific Panel on AI. The session sought to establish how scientific evidence can inform global AI policy-making, ensure inclusive participation from all nations and communities, and create frameworks for managing AI’s rapid development while maximizing benefits and minimizing risks.


Overall Tone:

The discussion maintained a consistently serious, collaborative, and optimistic tone throughout. Speakers emphasized urgency while remaining constructive, with a strong focus on multilateral cooperation and evidence-based decision-making. There was notable reverence for the UN’s role and a shared commitment to ensuring AI serves humanity broadly. The tone was professional and diplomatic, befitting a high-level international forum, with speakers building on each other’s points rather than expressing disagreement.


Speakers

Speakers from the provided list:


Anil Ananthaswamy – Moderator/Host, Author of “The Elegant Math Behind Machine Learning”


António Guterres – Secretary General of the United Nations


Yoshua Bengio – Professor, Scientific Director of MILA, AI researcher, member of UN Scientific Advisory Board and International AI Safety Report leadership, appointed to Independent International Scientific Panel on AI


Brad Smith – Vice Chair and President of Microsoft Corporation


Balaraman Ravindran – Professor at IIT Madras, member of International Independent Scientific Panel


Soumya Swaminathan – Former Chief Scientist at WHO (first woman chief scientist)


Ajay Sood – Principal Scientific Advisor to the Government of India


Anne Bouverot – France’s Special Envoy for AI, appointed by President Macron


Amandeep Singh Gill – Undersecretary General and Special Envoy for Digital and Emerging Technologies at the UN, moderator


Josephine Teo – Minister for Digital Development and Information of Singapore


Additional speakers:


None – all speakers mentioned in the transcript are included in the provided speakers names list.


Full session report: Comprehensive analysis and detailed insights

This high-level discussion at the United Nations represented a pivotal moment in establishing science-based international AI governance, centred around the launch of the Independent International Scientific Panel on Artificial Intelligence. The session brought together global leaders from government, academia, industry, and international organisations to address the fundamental challenge articulated by moderator Amandeep Singh Gill: “We cannot govern what we do not understand.”


The Foundation: Science-Based Governance Over Speculation

Secretary-General Guterres opened with a powerful premise that shaped the entire discussion: AI governance must be grounded in facts rather than speculation, hype, or disinformation. He emphasised that “AI innovation is moving at the speed of light, outpacing our collective ability to fully understand it, let alone govern it.” This challenge is compounded by AI’s borderless nature, making unilateral national approaches insufficient.


The Secretary-General announced the confirmation of 40 experts to the new Independent International Scientific Panel on AI, designed to provide evidence-based analysis that helps countries “move from philosophical debates to technical coordination.” Crucially, he emphasised the need for “meaningful human oversight in every high-stakes decision, in justice, health care, credit” and that “responsibility is never outsourced to an algorithm.”


The panel represents a practical architecture putting science at the centre of international cooperation, with the goal of delivering its first report ahead of the Global Summit and Global Dialogue on AI Governance in July. Guterres positioned this scientific approach not as a brake on progress but as “an accelerator for solutions” that makes progress “safer, fairer, and more widely shared.”


Navigating Uncertainty: The Challenge of Acting Without Complete Knowledge

Professor Yoshua Bengio, one of the world’s most cited AI researchers and a member of the new panel, provided crucial nuance by acknowledging that AI governance faces unique challenges compared to other scientific policy areas. Unlike climate science, where there is greater scientific consensus, AI researchers themselves often disagree about future expectations and interpretations of existing evidence. Bengio drew parallels to climate tipping points, noting that “we don’t have the experience of machines that are really smart and can change society, and be even potentially smarter than us.”


This uncertainty paradox—needing to act on potentially catastrophic risks without complete certainty—emerged as a central theme. Bengio argued that even without proof of specific risks, policymakers must pay attention when potential outcomes could be catastrophic, regardless of probability. He expressed particular concern about “how this will unfold for developing countries in the global south” and emphasised the importance of ensuring “everyone is at the table and no one is on the menu.”


Dr Soumya Swaminathan, former Chief Scientist at WHO, reinforced this perspective by drawing from COVID-19 experience, where “we had to review a couple of hundred publications every day to understand what was happening.” She positioned the new AI panel as similar to the Intergovernmental Panel on Climate Change (IPCC), emphasising the need for rapid evidence processing and adaptive policymaking. Crucially, she highlighted lessons from the pandemic about ensuring recommendations are contextually relevant, noting that some WHO recommendations were applicable in high-income countries but not in low-income settings due to different contexts.


Industry Perspective: Building Common Understanding Before Solutions

Microsoft’s Brad Smith provided a unique industry perspective that strongly endorsed multilateral approaches. Drawing on his view of economic theory about humanity’s tendency to repeat mistakes every 80 years due to generational memory loss, Smith argued that humanity also risks forgetting its great successes—particularly the creation of the UN, which he called “one of humanity’s greatest accomplishments of the 20th century.”


Smith’s most significant contribution was identifying a fundamental flaw in policy discussions: people rush to debate competing solutions without establishing shared understanding of problems. “One of the reasons people so often disagree about the solution is they don’t have a common understanding of the problem,” he observed. He illustrated this point by describing how he used Microsoft Copilot to grade tech industry predictions, finding an average grade of 25%.


He also challenged the prevalent focus on building smarter machines, arguing instead for using AI “to make people smarter, to help us do what we need to do,” emphasising that human capability is “neither fixed nor finite.”


Regional Perspectives and Implementation Challenges

The panel discussion revealed significant regional variations in AI governance approaches and evidence gaps. Professor Balaraman Ravindran from IIT Madras highlighted critical knowledge gaps about AI’s social impacts in India, noting that “all of these stories are coming to us from the west, so what is it that’s happening in India?” He emphasised the need for India-specific research on AI’s effects on children, social fabric, and different cultural contexts.


Professor Ajay Sood, India’s Principal Scientific Advisor, outlined India’s approach through the National AI Governance Framework, emphasising “techno-legal” solutions that embed governance through technical design—an approach also referenced by India’s Prime Minister. Drawing from India’s successful digital public infrastructure experience, this approach aims to integrate governance mechanisms directly into system architecture.


Anne Bouverot, France’s Special Envoy for AI, brought a European perspective emphasising the importance of accurate predictions for appropriate policy responses. She demonstrated how different assessments of AI’s employment impact lead to fundamentally different policy approaches: predictions of job elimination suggest universal basic income, while predictions of job transformation point toward training and reskilling programmes. She also mentioned that Joëlle Barral would serve as France’s nominee to the scientific panel.


Singapore’s Minister Josephine Teo provided a compelling example of how smaller states can contribute meaningfully to global AI governance. Despite having only 6 million people, Singapore has invested significantly in AI R&D and established an AI safety institute. She highlighted Singapore’s role as convener of the Forum of Small States with 108 members, demonstrating that effective AI governance requires diverse participation beyond major powers.


The Challenge of Pace and Inclusivity

A recurring theme was the tension between AI’s rapid development pace and the time required for thorough scientific assessment. Bengio noted that “because it’s moving so fast there’s always going to be a lag between even like the scientific papers” and policy responses, with studies involving people taking “months” while AI capabilities continue advancing.


Swaminathan emphasised that AI governance must include “the voices of women, a low-income woman, a farmer in a remote place” who will use technology very differently from large-scale users in developed countries. This global perspective was reinforced by multiple speakers who stressed that AI’s worldwide effects require globally representative governance structures.


Operationalising Principles: From Consensus to Implementation

While speakers noted substantial convergence on high-level AI principles—transparency, accountability, fairness, and safety—the primary challenge lies in operationalisation. Minister Teo emphasised the need for “standardised evaluation methodologies that work across different regulatory contexts” and capacity building to ensure all countries can meaningfully engage with technical challenges.


Bengio advocated for high-level principles that avoid technical details since “the details are going to change,” while others emphasised adaptive frameworks that can evolve with technological development. The concept of “techno-legal” approaches, embedding governance through technical design, emerged as a promising direction for making principles operational.


Global Cooperation and Concrete Next Steps

Throughout the session, speakers consistently emphasised the UN’s unique legitimacy and inclusiveness for AI governance. Guterres argued that without common baselines, “fragmentation wins, with different regions and different countries operating under incompatible policies and technical standards.”


The session concluded with concrete commitments: Singapore will host the second International Scientific Exchange on AI Safety in May, ASEAN will develop regional AI safety benchmarks, India will continue implementing its governance framework through public-private partnerships, and Microsoft pledged support for UN efforts.


Conclusion: A Framework for Evidence-Based Governance

This discussion established a framework for AI governance that is both scientifically grounded and politically realistic. By positioning the Independent International Scientific Panel as a bridge between evidence and policy, the session created a foundation for moving beyond speculation toward fact-based governance.


The session’s ultimate message was clear: effective AI governance requires transforming uncertainty into understanding through rigorous scientific assessment, ensuring that policy decisions serve humanity’s collective interests. As Secretary-General Guterres concluded, “Less hype, less fear. More facts and evidence. Guided by science, we can transform AI from a source of uncertainty into a reliable engine for the sustainable development goals.”


Session transcript: Complete transcript of the session
Anil Ananthaswamy

Today’s session begins from a simple but powerful premise. We cannot govern what we do not understand. It is my honor to open this session with a special address by the Secretary General of the United Nations, whose leadership has placed science and multilateral cooperation at the forefront of global AI governance. So please join me in welcoming His Excellency Antonio Guterres.

António Guterres

Thank you very much. There is a computer here. I don’t know to whom it belongs. Excellencies, ladies and gentlemen. Thank you for joining this discussion on the role of science in international AI governance. We are barreling into the unknown. AI innovation is moving at the speed of light, outpacing our collective ability to fully understand it, let alone govern it. AI does not stop at borders, and no nation can fully grasp its implications on its own. If we want AI to serve humanity, policy cannot be built on guesswork. It cannot be built on hype or disinformation. We need facts we can trust and share across countries and across sectors. Less noise, more knowledge. That is why the United Nations is building a practical architecture that puts science at the center of international cooperation on AI.

And it starts with the Independent International Scientific Panel on Artificial Intelligence. This panel is designed to help close the AI knowledge gap and assess the real impacts of AI across economies and societies so countries at every level of AI capacity can act with the same clarity. It is fully independent, it is globally diverse, and it is multidisciplinary because AI touches every area of every society. And I’m delighted that the General Assembly of the United Nations confirmed the 40 experts I proposed to member states. Now the real work begins on a fast track to deliver a first report ahead of the Global Summit and Global Dialogue on AI Governance in July. The panel will provide a shared baseline of analysis,

helping member states move from philosophical debates to technical coordination, and anchor choices in evidence so policy is neither a blunt instrument that stifles progress nor a bystander to harm. That is how science strengthens decision-making. When we understand what systems can do and what they cannot, we can move from rough measures to smarter, risk-based guardrails. Guardrails that protect people, uphold human rights, and preserve human agency. Guardrails that build confidence and give business clarity so innovation can move faster in the right direction. Science-led governance is not a brake on progress. It is an accelerator for solutions. A way to make progress safer, fairer, and more widely shared. It helps us identify where AI can do the most good the fastest.

And it helps us anticipate impacts early, from risks for children, to labor markets, to manipulation at scale. So countries can prepare, protect, and invest in people. Today, international cooperation is difficult. Trust is strained, and technological rivalry is growing. Without a common baseline, fragmentation wins, with different regions and different countries operating under incompatible policies and technical standards. A patchwork of rules will raise costs, weaken safety, and widen divides. Science is a universal language. Guided by the independent panel and the global dialogue on AI governance, we can align with the world. We can align our technical baselines. When we agree on how to test systems and measure risk, we create interoperability. So a start-up in New Delhi can scale globally with confidence because the benchmarks are shared, and safety can travel with the technology.

Finally, let us be clear. Science informs, but humans decide. Our goal is to make human control a technical reality, not a slogan. And that requires meaningful human oversight in every high-stakes decision, in justice, health care, credit. And it requires clear accountability so responsibility is never outsourced to an algorithm. People must understand how decisions are made, challenge them, and get answers. Excellencies, thank you, ladies and gentlemen. The message is simple. Less hype, less fear. More facts and evidence. Guided by science, we can transform AI from a source of uncertainty into a reliable engine for the sustainable development goals. Let us build a future where policy is as smart as the technology it seeks to guide. Thank you.

Anil Ananthaswamy

Thank you, Secretary General, for those inspiring opening remarks. Ladies and gentlemen, we were going to have Mr. Brad Smith, Vice Chair and President of Microsoft Corporation, as our next speaker, but he’s running a bit late, so we will move to the next item on the agenda. I would like to welcome Professor Yoshua Bengio to the stage, Scientific Director of MILA and one of the world’s leading AI researchers. He and I will be in a fireside chat, and we’re hoping that Mr. Brad Smith will be able to join us very soon. Thank you. So, welcome Professor Bengio.

Yoshua Bengio

Thank you for having me.

Anil Ananthaswamy

Our pleasure. So, you are the most cited computer scientist, and I looked it up: you’re actually the most cited living scientist today, and have played a unique role at the global science-policy interface, including through the UN Scientific Advisory Board and your leadership of the International AI Safety Report. So from your perspective, how do these science-policy interfaces actually work in practice, and where do they add the most value?

Yoshua Bengio

So it’s tricky, right, because there are many different views, and especially different interests, in business and in different governments. And the role of science, the role of the kind of synthesis of science that we want for the UN panel, that we have sought for the AI Safety Report, is to try to provide a shared understanding as a basis for those political discussions, and not be influenced, as much as is humanly possible, by those tensions that exist in our societies. And I think it’s particularly important because, maybe unlike in the case of climate, the scientists themselves don’t always agree on what to expect for the future or even how to interpret the science that exists.

I just want to add something. Something that’s a little bit subtle about this kind of exercise is being able to recognize the uncertainty and the divergences that exist: where is it that scientists agree, where is it that the evidence is strong, where is it that we have clues that matter. Even if we’re not certain about a particular risk, we might have clues about it. And if the risk has huge severity, in other words, if it does unfold it could be catastrophic, then policymakers need to pay attention. And it’s always difficult when we don’t have proof that something terrible is going to happen. Maybe a good analogy is tipping points in climate, right?

Because there’s not enough past evidence to be sure that a particular tipping point is going to happen. The situation is similar in AI, in the sense that we don’t have the experience of, say, machines that are really smart and can change society, and be even potentially smarter than us. So how can we make the right policy decisions? That’s why it is so important to have as neutral and as fact-based an evaluation of what is going on available to everyone, in a language that is accessible to everyone, and of course for policymakers, which, by the way, is difficult for scientists to achieve. They need help, they need iterations, they need feedback from people who are used to the interface between science

Anil Ananthaswamy

Is there anything in particular about the highly technical nature of AI and also the pace of change that makes this interface particularly difficult?

Yoshua Bengio

Yes, yes. The facts shown in scientific benchmarks across labs, companies, and academia show very rapid growth in the capabilities of these AI systems, and that growth is uneven. So we see AIs even surpassing most people on some measurements of capability, and being kind of stupid, or like a six-year-old, on some other things. So it’s very difficult to grasp what that means. But because it’s moving so fast, there’s always going to be a lag. Even the scientific papers take time to be written, if there are studies; think about studies that involve people, they’re going to take months. And so by the time we start seeing clues that there’s a potential problem… You can think of something recent that was not expected, like the psychological effects on people of these chatbots. We now have lots of anecdotal evidence, and we’re only starting to see the scientific studies. And of course, on the policy side, it’s going to be even more difficult, even later, because those discussions are going to happen after we see scientific evidence. So there is going to be a lag, and that’s a real problem, because things could move

Anil Ananthaswamy

So maybe that leads well into our next question. We often hear that AI governance is moving too slowly and from your experience, what kinds of scientific assessments or benchmarks could realistically keep pace with this rapid change?

Yoshua Bengio

Yeah, that’s a great question. My opinion on this is that we should be thinking about not just policy in the usual sense of coming up with principles; we should try to strive for high-level principles that can be applied without having to go into the details, because the details are going to change. And the second thing is, I think we should strive for technologies that are going to help implement those guardrails in the field, in the deployment of AI, because otherwise there’s not enough time to

Anil Ananthaswamy

Well, thank you for those insights. And also congratulations on your recent appointment to the Independent International Scientific Panel on AI. In a few words, how do you see this new panel helping to strengthen the link between science and global AI policymaking?

Yoshua Bengio

Well, I think there’s something really important about this panel: its global aspect, and being rooted in the UN. And the reason I’m saying this is that AI is going to be transforming our world very clearly, and it’s going to have global effects, whether it is on the good side, the benefits, or on the risks, but also the kind of power relationships that are going to be changing in the future. And I’m personally very concerned about how this will unfold for developing countries in the Global South. And we need to work in a multidisciplinary way so that we can foresee those effects and we can start discussions to make sure that everyone is at the table and no one is on the menu.

Anil Ananthaswamy

Well said, Professor Bengio. Well, thank you very much for kick -starting our discussion. We will now turn to our panel. So, ladies and gentlemen, it is essential that discussions about AI policy include the voices of key industry actors, and I am pleased to invite Mr. Brad Smith, Vice Chair and President, Microsoft Corporation, for his keynote address.

Brad Smith

Well, good morning, everyone. It’s a pleasure to be here. My apologies for being a few minutes late. I want to offer a couple of thoughts this morning. The first thing I think we should come together to think about is that, in my opinion, this is a moment in time when we need to reflect on and reinvest in the importance of the United Nations. There is a well-known economic theory that says that humanity is, in many ways, almost destined to repeat its great economic mistakes every 80 years. The reason it’s 80 years is because that is basically the lifespan of human beings. And so every 80 years, almost everyone who had any living memory of a prior financial calamity has left the planet.

If you look at the Great Recession that started in 2008, what you realize is that it happened 79 years after the stock market crash that led to the Great Depression in 1929. And you can follow this series of financial mistakes all the way back to the bursting of the tulip bubble in the Netherlands hundreds of years ago. I think there is a corollary worth thinking about. Just as there is a risk that humanity forgets the mistakes it made 80 years ago, humanity runs the risk of forgetting the great successes it created 80 years ago. It was just over 80 years ago that the world came together to create the United Nations. It was, in my opinion, one of humanity’s greatest accomplishments of the 20th century.

It is a unique organization in a very imperfect world. And so, of course, on any day and any year, it is possible for anyone to blame the United Nations for the imperfections that we see all around us. But the truth is this. Those imperfections are fewer, and their consequences are less disastrous, in my view, because of the United Nations. And one of the great things about working in a job like mine at Microsoft, in my opinion, is that I get to work in a global organization. We have subsidiaries in 120 countries. We do work in 190 countries. We see the world. It turns out that everywhere we go, we see the United Nations. Sometimes it’s the United Nations Development Program, working to foster economic development.

Sometimes it is UNHCR, helping refugees. Sometimes it is the UN Office of Human Rights, seeking to protect human rights. But the truth is, if there’s a problem, the United Nations is almost always part of the solution. We need to remember this. And we need to remember that however challenging the last 80 years have been, we have managed, as humanity, as a species, to live with the ever-constant presence of nuclear weapons without using them or destroying ourselves. The United Nations has, in fact, in my view, been indispensable to not just the protection of people, but the preservation of our species. Why does that matter now? Why should we talk about it today and this week in Delhi?

Well, because here we are on the cusp of the future. A technology that we all know will likely change the future. Here we are in the second month of the second quarter of the 21st century, and we need to focus on how we bring the institutions on which we rely into that future. So then let me talk about a second aspect that I think is so important to think about this month. One of the things I’m constantly struck by… leading a global organization is how often everyone disagrees with each other about almost everything. But one of the things I’ve learned along the way is that I think one of the reasons people so quickly disagree is that we rush so quickly to debate competing solutions.

This happens in domestic politics. It happens in international diplomacy. It, frankly, happens in a global company. It actually happens everywhere, even in families. As soon as there’s a problem, people want to talk about the solution. And then people have different solutions, and then they debate, and they disagree, and they argue, and sometimes it’s even worse than that. One of the things I’ve learned is the reason people so often disagree about the solution is they don’t have a common understanding of the problem. They don’t spend enough time talking about the problem. They don’t have a shared contextual understanding. of the problem they’re trying to solve. They’re too quick to want to blame someone for the problem, and then that spirals into a discussion that becomes completely unconstructive.

Why does that matter today? Because what we’re here to talk about today is all about creating a more common understanding together, based on science, of where artificial intelligence is going. This is an indispensable tool. Indeed, it’s a critical service for humanity so we can all learn together, we can all think together, we can all understand together what is going on in the world. I think it’s especially critical, to be honest, when it comes to artificial intelligence, because if you just consider most of the conversations you have about this technology, I would argue that they have two flaws. The first flaw is that they usually involve people making very grandiose predictions about the future.

You know what? I’ve worked in the tech sector for 32 years. I have listened for more than three decades to my colleagues in my industry around the world make bold predictions about the future. No one ever holds them accountable a decade later for whether they were right or wrong. I used the researcher agent in Microsoft Copilot a couple weekends ago, and I loaded a lot of names. I won’t say whom, but you can guess. And I said, look at all the predictions they made about all the technologies, and look at the predictions they made about when these technologies would come to do something or another, and give them a grade. The average grade was 25%. You couldn’t even get close to the top.

You were at the bottom. So let’s just understand one thing together. There is no such thing as a crystal ball. No one has one. But what we do have is the ability to understand where we are today. And what we do have is a better understanding to just appreciate what is happening each and every year. There is a second flaw, in my view, in many of the conversations that take place, including at this AI summit. Everybody wants to talk about how they’re going to make machines smarter. That’s interesting. I think it’s interesting to imagine living in a world where a data center is like a country of geniuses. But as I mentioned yesterday, compared to the people who lived in the Bronze Age, we’re all geniuses.

We’re all geniuses already. What that should remind us… is that human capability is neither fixed nor finite. And so what really matters, in my opinion, is not whether we are going to build machines that are smarter than humans. Yes, in some ways we will. But how will we use those machines to make people smarter, to help us do what we need to do? That is what this effort is all about. Wow. Let’s harness the power of science to build a common understanding of what is changing each year, and then let’s connect it with the global dialogue on governance so we can pursue policies that will ensure that this technology serves people. There’s no better place to get started than here.

There’s no better time than now. And let’s face it, there is no better institution on the planet that can do more to serve humanity and protect the world. than the United Nations. And on behalf of Microsoft, I just want you to know we are putting our full energy and resources to do everything that we can to help. Thank you very much.

Anil Ananthaswamy

Thank you. Thank you, Mr. Smith, for those insights on responsibility, accountability, and the role of industry. We now turn to our panel. Our panel brings together scientific leadership, public policy expertise, and international coordination. Please welcome to the stage our speakers: Professor Balaraman Ravindran, IIT Madras; Soumya Swaminathan, former Chief Scientist, WHO; Ajay Kumar Sood, Principal Scientific Advisor to the Government of India; and Anne Bouverot, France’s Special Envoy for AI. I am also pleased to introduce our moderator, Amandeep Singh Gill, Under-Secretary-General and Special Envoy for Digital and Emerging Technologies. I invite him to guide the discussion. Thank you very much.

Amandeep Singh Gill

Thank you very much. Thank you, Anil, for leading us, and for those who have not read his book, The Elegant Math Behind Machine Learning, please do have a go at it. It is not possible to govern something that we don’t understand. So something as simple as this: if 90% of AI is matrix multiplication, a 0.01% improvement in the efficiency of matrix multiplication, as he was explaining, has huge energy implications. So I want to welcome our esteemed panelists. The stage has been set by very inspiring keynotes and a fireside chat. So we will dive straight in. And since we are running a little short of time, I’m going to compress the two rounds into one rapid-fire round.

So all of you have worked on or are working on the science policy interface. And my sense is that there is a loop here, between science and evidence, and between evidence and policy. And we want to explore that loop today in the context of the significant development of the setting up of the International Independent Scientific Panel at the United Nations. So I want to start with you, Soumya. You were the first chief scientist, the first woman chief scientist, at the WHO, and worked at a very difficult time during COVID, when trusted evidence was so critical. So in your view, what makes this evidence that comes from science trusted and actionable for policymakers?

Soumya Swaminathan

The evidence evolves very rapidly. The field is moving so rapidly. During COVID, we had to review a couple of hundred publications every day to understand what was happening on different aspects: on the virus, on the immunology, on how vaccines were working, and drugs, and we had to make recommendations based on the best available evidence that day. I think we may be in a similar situation with AI, and it’s wonderful that the UN has now set up this body, which I see as something like the IPCC. I think we do need global governance. We need something like, you know, we’re talking now about preventing future pandemics by sharing data on pathogens, making sure that we have protocols in place where countries are willing to share that data, and also, of course, to share the tools, the vaccines or drugs, when they become available, when or in case there is another pandemic.

Similarly, I hope that this scientific body that’s been set up by the UN would also establish systems that would link to national bodies and systems, and that would ensure the voices of all are heard. So one of the things during COVID was that some of our recommendations were relevant in high-income countries but not in low-income countries, because the context is very different. And the WHO was criticized for this, I think rightfully so, and we need to learn from those mistakes. So it’s the voices, for example, of women: a low-income woman, a farmer in a remote place, is going to use technology very differently from a large farmer with access to lots of machines in Europe or North America.

So if AI has to work for everyone, then we need to make sure that those voices are heard. And ultimately, I think that loop you talked about, sometimes policy is made in advance of evidence. You have to. You can’t wait. But the policy must change. It must ask for the relevant evidence and be able to adapt when that is clear.

Amandeep Singh Gill

Thank you very much, Soumya. I’m going to come to you, Ravi, Professor Balaraman Ravindran. Now, as AI policies begin to take shape, and you’ve been involved in some policymaking yourself, what signals from regulators or public sector users should most urgently guide future AI research priorities? So, in a sense, the loop coming back into research.

Balaraman Ravindran

So, thank you for that question. With AI right now, especially in the Global South, we don’t completely understand the implications of adopting AI and how it is going to affect society, people’s livelihoods, and everything. In fact, I also feel that we don’t have enough evidence about how AI is even affecting the social fabric: how are children getting increasingly isolated with the adoption of AI, and whether the effect is uniform between cities and rural India, because the cultural setup is very different, and so on and so forth. So if the government, as we heard our Honourable Prime Minister say yesterday, should focus more on youth and the impact of AI on youth, what evidence do we have about what is happening in India? We hear stories about how children, and also people who are mentally challenged or under stress, are becoming dependent on AI models, and so on and so forth. But all of these stories are coming to us from the West. So what is it that’s happening in India? When we have these kinds of policy decisions that have to be made, the government says that AI should be pushing efficiency in agriculture. So do we have a benchmark in India that can evaluate the effectiveness of these AI models in agriculture? What are the kinds of flaws that happen when I, for example, build a bot that can act as a co-pilot for a farmer? These are bigger challenges. So we have a lot of questions.

Amandeep Singh Gill

If I can quickly follow up: where do you actually see evidence for impact in the sustainable development goals space? Just a quick example or two.

Balaraman Ravindran

So, I… That was not in the notes he gave us earlier, so I have to think on my feet here. So let me take one thing that we are very familiar with, that we are working on right now, in the education space, right? So, for example, we don’t have evidence on AI interventions: how likely are they to change student learning behavior? We have done some preliminary studies. The author of the study is somewhere in the audience, because he has been sending me pictures of the stage. What we have found is that the effectiveness of AI adoption is a direct function of habit. So if the students are using AI more, then they tend to…

But now I don’t know what is the causal factor there. I don’t know if the causal factor is whether they are using AI more, therefore they get better effect, or do they use AI more because they are getting better effect. So these are questions that we have to ask. Even in something as simple as education. I am saying simple because there is a lot of positive buzz around using AI in education. But even there, we need a lot more evidence to come.

Amandeep Singh Gill

Thank you, Ravi, and we’re honored to have you on the new International Independent Scientific Panel. So if I may jump to you, Anne. All of us know you as the special envoy of President Macron, who made the February summit happen last year in Paris, but you’re also an AI scientist. So from your perspective, you have kind of lived in these two worlds. What works best for the interface? What kind of scientific evidence would you take to President Macron if you were to convince him to change a policy?

Anne Bouverot

Well, thank you for the question. I studied AI a long time ago, but I’m not really a scientist. But I try to understand, of course. Understanding, I think, is probably the very first thing. And before we move to policymakers, I think it’s for citizens, for us as human beings. The things that we don’t understand, we tend to be more afraid of. I often quote the scientist Marie Curie. She wasn’t an AI scientist, but she’s one of the brightest scientists that we’ve had, a two-time Nobel laureate. And there’s a wonderful quote by her. She says, nothing in life is to be feared; everything is to be understood. And now is the time to understand more, because, of course, there were things to be afraid of at the time when she was living, and there are now as well.

So trying to understand things, having scientific panels, is definitely the right thing to do. And we’re fully supportive in France of the scientific panel. We’re very proud that Joëlle Barral is our nominee; she’s a scientist in AI and health and a member of the panel. This is absolutely excellent. So, yes, understanding things is absolutely key. And then maybe just a second point, to give an example of how understanding something or not can lead to very different policy decisions in the field of AI and work. We’ve had predictions. I remember in 2013, that was the previous AI revolution, but scientists, I believe at Oxford, said within 10 years, half of the jobs would disappear. We haven’t seen that.

At the AI summit in Bletchley Park, for very good reasons, we had frontier AI leaders, in particular Elon Musk, saying within two years, half of the jobs will disappear. So, of course, the fact that this didn’t happen doesn’t mean that there isn’t a risk for work. Of course, there’s a risk for work. But if your potential or probable outcome is the end of jobs, then you need to think about universal basic income: what are we going to do with all the people who don’t have jobs? If what economists are saying is that 80% of the jobs will be transformed, then the policy outcome is training, skilling, reskilling, and helping to educate people. That’s why listening to economists and having the International Labour Organization and other institutions really follow closely what is happening in which countries, for younger people, for older people, for women, for men, for different types of jobs, that’s super

Amandeep Singh Gill

Merci beaucoup, Anne. Merci. And I’m going to turn to you, Professor Sood. You occupy an important position within the Indian system, and you look at science broadly. And India has deployed some of these technologies at societal scale: India Stack, the digital public infrastructure. So how do you look at the AI opportunity, and importantly, how do you look at AI risks? And how are you prioritizing R&D allocations to harness the opportunities and manage the risks?

Ajay Sood

Thank you very much for having me on the panel. As you know, on all the aspects you asked about, we have had very extensive consultations across all stakeholders. And we came out with the National AI Governance Framework, not a regulatory framework, but a framework for how we really handle AI, in all aspects. And there we have looked at how we enable compute facilities, compute resources, for our people. Because we are not at the scale where a few trillion dollars are being invested. So we came out with a framework which we think, with public-private partnership, we could enable. And we could see the results of that within a year, as demonstrated at the AI Summit, with the release of AI models and so on.

The other aspect which is very important, as you rightly said, is risk assessment. This is where, as has been mentioned, our experience with the digital public infrastructure comes in, which has been rolled out at a very large, public scale with safety and security, which is as difficult as in AI. AI, of course, is more difficult. We still do not know the risks. But when we were dealing with the digital public infrastructure, whether for financial transactions or for identity verification and so on, it was a challenge. And that was done by embedding governance through technical design. And this is what we call techno-legal, which the Honourable Prime Minister mentioned at the Paris summit.

And he also mentioned it here. So this is where we are suggesting that this could be one way to look at it. It’s not that everything is laid out. We will need a framework for that. We will need technologies for that. But this is one way which will enable a smooth interaction, if we can bring in this technological framework.

Amandeep Singh Gill

Thank you so much for those insights. And now, since we are running out of time, I’m going to discriminate against the men on the panel, so my apologies in advance. I’m going to turn back to you, Soumya and Anne, for a 40-second, 30-second reflection. What do you think in terms of the pace and direction of technology, the opportunities, including for accelerating scientific discovery, and the risks? What would be your advice for the International Independent Scientific Panel? Maybe Anne, you can go first, 40 seconds.

Anne Bouverot

Yes, I think AI has a strong potential for helping science. We’ve seen that with the two Nobel prizes, in physics and chemistry, a year back. There are many more areas in science where AI can help. It can only be possible if we have databases of scientific data that are available to the world, and that are constructed by scientists and funded by governments and international institutions around the world. So this is a very important topic for research.

Amandeep Singh Gill

Thank you, Anne. Soumya, you have the last one.

Soumya Swaminathan

Yes, I agree very much with Anne. And I think that the scientific panel could actually help network many more groups of scientists from around the world, perhaps sectorally, for example, what’s happening in health, what’s happening in education, what’s happening in agriculture, looking at the evidence as it emerges, encouraging research, setting priorities, but also looking at safety and risks, because I think that’s going to be very important. There may be unanticipated risks and harms that we have not considered. And, of course, equity: being a UN-led panel, ensuring that equity is at the heart of AI and that it’s being done for the public good.

Amandeep Singh Gill

Fantastic, thank you. That’s a great closing. Ladies and gentlemen, please join me in thanking our outstanding panel. And we are going to move straight to the closing. Over to you, Anil.

Anil Ananthaswamy

Thank you to the panel for a rich and forward-looking discussion. To close this session, it is my honor to invite Josephine Teo, Minister for Digital Development and Information of Singapore, to deliver the closing remarks. Minister Josephine Teo.

Josephine Teo

Good morning, everyone. First, allow me to thank the Secretary-General for his remarks; they serve as very useful guidance to all of us working on this important technology. For the closing this morning, I thought it would perhaps be useful to offer a perspective from a small state. Singapore has a population of just 6 million people, and more than 30 years ago at the UN we became the convener of the Forum of Small States, which still has about 108 members. I will just make three points on how we look at developments on this front. The first point is that we believe in AI being used as a force for the public good, but to do so, it is important that we continue to invest in the science that underpins it and ground trust in evidence. This certainly requires sustained investment in research, and is also the reason why we set aside a billion dollars in a national AI R&D plan, which will include foundational and applied research into responsible AI. We believe in it, and we have to put money behind this effort. There are of course other investments, such as in building up a digital trust center.

It’s our designated AI safety institute that has been participating in important conversations on this topic, as well as setting up a center for advanced technologies in online safety. So those are just some of the efforts that we can dedicate resources to as a small state. The second point I want to make is that there is almost always going to be a tension between moving quickly, given the pace of AI development, and moving carefully, given the latest evidence that presents itself on what we should be paying attention to. Both impulses are necessary, and we believe it is not impossible to try and balance them through the integration of science and policy. It is not easy, but it is not an effort that we must give up on.

I should just add that on this score, it will be much better if we can cooperate internationally to develop sound approaches that can also be interoperable across different jurisdictions. And this is one effort that we believe underpins the work that is being carried out by the UN. And this brings me to my third point. I want to highlight the important role that an organisation like the United Nations plays in facilitating global discourse to bridge science and policy. I cannot overemphasise the importance of this effort. We must recognise that the global AI governance landscape is becoming increasingly fragmented. There are multiple initiatives, frameworks and institutions. The UN’s unique value lies in its legitimacy and inclusiveness to encourage interoperability across efforts.

The Secretary-General talked about this too. We therefore welcome the establishment of the Independent International Scientific Panel on AI, building on the work of the UN High-Level Advisory Body on AI, which published its report on Governing AI for Humanity at the end of 2024. We note that the panel’s multidisciplinary approach, covering machine learning, applied AI, social science, and ethics, is necessary to address the complexity of AI governance challenges. Finally, I would just like to acknowledge that we now have substantial convergence on the high-level AI principles. Yoshua talked about this. Transparency, accountability, fairness, safety. But the challenge is in operationalizing them. We need to find standardized evaluation methodologies that work across different regulatory contexts.

We need capacity building so that all countries can meaningfully engage with the technical evidence and the technical challenges, and not just those with large AI research ecosystems. I would encourage all stakeholders to view scientific input not as a constraint on policy flexibility, but as a foundation for more durable, effective governance that can maintain public trust. We need to keep the conversation going, one where science informs governance, and governance sharpens science. I would just perhaps end by highlighting Singapore’s continued commitment to contribute to advancing these discussions. We were very fortunate to host the International Scientific Exchange on AI Safety and to bring about the Singapore Consensus on Global AI Safety Research Priorities.

Yoshua was in Singapore for this very momentous event. We will continue to participate in joint testing efforts of the International Network for Advanced AI Measurement, Evaluation and Science. We have organized two editions of the Singapore AI Safety Red Teaming Challenge, the first multicultural and multilingual AI safety red teaming exercise focused on the Asia-Pacific region. And as chair of the ASEAN Working Group on AI Governance, we have actively spearheaded efforts to foster a trusted environment in ASEAN by adapting global norms and best practices for ASEAN, and in bringing about regional harmonization through the ASEAN Guide on AI Governance and Ethics, as well as expanding it to address the risks in generative AI. We are now working within ASEAN to explore practical tools for AI safety testing, and we aim to collectively develop a set of AI safety benchmarks that reflect our region’s concerns.

And finally, I’d like to welcome all colleagues to join us in Singapore for the second edition of the International Scientific Exchange, which we expect to take place on the 17th and 18th of May, and we look forward to furthering these discussions.

Anil Ananthaswamy

Thank you very much once again. Thank you, Minister Teo, for your closing remarks. This session is now concluded. Thank you very much.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
António Guterres
6 arguments · 110 words per minute · 653 words · 353 seconds
Argument 1
Science must be at the center of international AI cooperation to close knowledge gaps and provide shared baseline analysis
EXPLANATION
Guterres argues that effective AI governance requires facts rather than guesswork or hype, and that science-based cooperation is essential for understanding AI’s real impacts across economies and societies. He emphasizes that all countries, regardless of their AI capacity, need access to the same clarity through scientific analysis.
EVIDENCE
The establishment of the Independent International Scientific Panel on Artificial Intelligence with 40 experts confirmed by the UN General Assembly, designed to deliver a first report ahead of the Global Summit and Global Dialogue on AI Governance in July
MAJOR DISCUSSION POINT
The Role of Science in AI Governance
AGREED WITH
Yoshua Bengio, Brad Smith, Soumya Swaminathan, Anne Bouverot, Josephine Teo, Anil Ananthaswamy
Argument 2
AI does not stop at borders and requires global cooperation; the UN provides unique legitimacy and inclusiveness for AI governance
EXPLANATION
Guterres contends that AI’s global nature means no single nation can fully understand its implications alone, making international cooperation essential. He positions the UN as the ideal institution to facilitate this cooperation due to its universal membership and legitimacy.
EVIDENCE
The creation of practical architecture putting science at the center of international cooperation, including the Independent International Scientific Panel and Global Dialogue on AI Governance
MAJOR DISCUSSION POINT
International Cooperation and the UN’s Role
AGREED WITH
Brad Smith, Soumya Swaminathan, Josephine Teo
Argument 3
The panel will provide shared baseline analysis to help countries move from philosophical debates to technical coordination
EXPLANATION
Guterres explains that the scientific panel will shift discussions from abstract principles to concrete technical coordination by providing evidence-based analysis. This approach will anchor policy choices in evidence rather than speculation, enabling more effective governance.
EVIDENCE
The panel is fully independent, globally diverse, and multidisciplinary because AI touches every area of society; it will deliver analysis to help member states make evidence-based decisions
MAJOR DISCUSSION POINT
The Independent International Scientific Panel on AI
AGREED WITH
Yoshua Bengio, Soumya Swaminathan, Josephine Teo
Argument 4
AI innovation moves at light speed, outpacing collective ability to understand and govern it
EXPLANATION
Guterres highlights the unprecedented pace of AI development as a fundamental challenge for governance. He argues that this speed creates a gap between technological advancement and human understanding, making it difficult to develop appropriate policies.
EVIDENCE
The characterization of the current situation as ‘barreling into the unknown’ with AI innovation moving faster than the collective ability to understand or govern it
MAJOR DISCUSSION POINT
AI Development Pace and Policy Challenges
AGREED WITH
Yoshua Bengio, Soumya Swaminathan, Josephine Teo
Argument 5
Policy cannot be built on guesswork or hype but needs facts that can be trusted and shared across countries
EXPLANATION
Guterres emphasizes that effective AI governance must be grounded in reliable, evidence-based information rather than speculation or promotional claims. He advocates for ‘less noise, more knowledge’ as the foundation for sound policymaking.
EVIDENCE
The principle of ‘less noise, more knowledge’ and the establishment of scientific architecture to provide trustworthy facts across countries and sectors
MAJOR DISCUSSION POINT
Evidence-Based Policy Making
Argument 6
Science-led governance accelerates solutions and helps identify where AI can do the most good fastest
EXPLANATION
Guterres argues that science-based governance is not a constraint on progress but rather an accelerator that makes development safer, fairer, and more widely shared. It enables early identification of both opportunities and risks, allowing for proactive rather than reactive approaches.
EVIDENCE
The concept that science-led governance helps anticipate impacts early, from risks for children to labor markets to manipulation at scale, enabling countries to prepare, protect, and invest in people
MAJOR DISCUSSION POINT
Technology for Human Benefit
DISAGREED WITH
Brad Smith
Yoshua Bengio
4 arguments · 141 words per minute · 828 words · 351 seconds
Argument 1
Scientific panels should provide neutral, fact-based evaluation accessible to everyone, especially policymakers
EXPLANATION
Bengio emphasizes the importance of synthesizing scientific knowledge in a way that transcends political and business interests to provide shared understanding. He notes that unlike climate science, AI scientists don’t always agree on future expectations, making neutral synthesis even more critical.
EVIDENCE
The role of synthesis in the UN panel and AI Safety Report to provide shared understanding as basis for political discussions, and the need for iterations and feedback from people experienced in science-policy interface
MAJOR DISCUSSION POINT
The Role of Science in AI Governance
AGREED WITH
António Guterres, Brad Smith, Soumya Swaminathan, Anne Bouverot, Josephine Teo, Anil Ananthaswamy
Argument 2
The rapid and uneven growth of AI capabilities creates difficulty in grasping implications, with significant lag between scientific evidence and policy
EXPLANATION
Bengio explains that AI systems show uneven development, surpassing humans in some areas while performing like children in others, making it hard to understand overall implications. The fast pace means there’s always a lag between scientific studies and policy responses.
EVIDENCE
Scientific benchmarks showing very rapid growth of AI capabilities across labs, companies and academia; example of psychological effects of chatbots where anecdotal evidence exists but scientific studies are just beginning; studies involving people take months while technology moves faster
MAJOR DISCUSSION POINT
AI Development Pace and Policy Challenges
AGREED WITH
António Guterres, Soumya Swaminathan, Josephine Teo
DISAGREED WITH
Josephine Teo
Argument 3
High-level principles should be applied without going into details since details change rapidly
EXPLANATION
Bengio advocates for developing governance frameworks based on broad principles that can be implemented without requiring constant updates to technical details. He also suggests developing technologies that can help implement guardrails in AI deployment.
EVIDENCE
The recommendation to strive for high-level principles and technology that help implement guardrails in AI deployment, given the rapid pace of change
MAJOR DISCUSSION POINT
Evidence-Based Policy Making
DISAGREED WITH
Anne Bouverot
Argument 4
The panel’s global and multidisciplinary aspect rooted in the UN is important for addressing power relationships and effects on developing countries
EXPLANATION
Bengio highlights the significance of the panel’s global scope and UN foundation for ensuring that AI’s transformative effects, both positive and negative, are considered from a worldwide perspective. He expresses particular concern about how AI developments will impact developing countries and the Global South.
EVIDENCE
Personal concern about how AI will unfold for developing countries in the Global South and the need for multidisciplinary work to foresee effects and ensure ‘everyone is at the table and no one is on the menu’
MAJOR DISCUSSION POINT
The Independent International Scientific Panel on AI
AGREED WITH
Soumya Swaminathan, Anne Bouverot, Balaraman Ravindran
Brad Smith
3 arguments · 132 words per minute · 1339 words · 606 seconds
Argument 1
The UN represents one of humanity’s greatest accomplishments and remains indispensable for global problem-solving
EXPLANATION
Smith argues that just as humanity risks forgetting economic mistakes every 80 years, it also risks forgetting great successes like the UN’s creation. He contends that despite imperfections, the world is better because of the UN’s existence and work across multiple domains.
EVIDENCE
Microsoft’s global presence in 120 countries and work in 190 countries showing UN involvement everywhere there are problems; examples of UNDP, UNHCR, and UN Office of Human Rights; the UN’s role in preventing nuclear weapons use over 80 years
MAJOR DISCUSSION POINT
International Cooperation and the UN’s Role
AGREED WITH
António Guterres, Soumya Swaminathan, Josephine Teo
Argument 2
People often disagree on solutions because they lack common understanding of problems; science helps create shared contextual understanding
EXPLANATION
Smith observes that disagreements often arise not from fundamental differences but from rushing to debate solutions without first establishing a shared understanding of the underlying problems. He argues that science-based common understanding is essential for constructive dialogue.
EVIDENCE
Observation that disagreements happen in domestic politics, international diplomacy, global companies, and families when people rush to solutions without understanding problems; the role of the scientific panel in creating common understanding based on science
MAJOR DISCUSSION POINT
Evidence-Based Policy Making
AGREED WITH
António Guterres, Yoshua Bengio, Soumya Swaminathan, Anne Bouverot, Josephine Teo, Anil Ananthaswamy
Argument 3
The goal should be using machines to make people smarter rather than just building smarter machines
EXPLANATION
Smith argues that the focus should shift from creating machines that surpass human intelligence to using AI to enhance human capabilities. He draws an analogy to how people today are like geniuses compared to those in the Bronze Age, suggesting human capability is neither fixed nor finite.
EVIDENCE
Analogy that compared to people in the Bronze Age, ‘we’re all geniuses already’; the concept of a data center being ‘like a country of geniuses’ and using that to help people do what they need to do
MAJOR DISCUSSION POINT
Technology for Human Benefit
DISAGREED WITH
António Guterres
Soumya Swaminathan
3 arguments · 180 words per minute · 451 words · 149 seconds
Argument 1
Evidence-based governance requires rapid review and adaptation as the field moves quickly, similar to COVID response
EXPLANATION
Swaminathan draws parallels between AI governance and pandemic response, noting that during COVID, WHO had to review hundreds of publications daily to make evidence-based recommendations. She suggests AI governance may require similar rapid evidence processing and policy adaptation.
EVIDENCE
WHO’s experience during COVID of reviewing ‘a couple of hundred publications every day’ to understand virus, immunology, vaccines, and drugs, making recommendations based on best available evidence that day
MAJOR DISCUSSION POINT
The Role of Science in AI Governance
AGREED WITH
António Guterres, Yoshua Bengio, Josephine Teo
Argument 2
Global governance is needed with systems linking national bodies to ensure all voices are heard, especially from developing countries
EXPLANATION
Swaminathan emphasizes the need for global AI governance systems that connect to national bodies and ensure inclusive participation. She stresses that AI governance must account for different contexts, particularly the needs and perspectives of developing countries and marginalized groups.
EVIDENCE
WHO’s experience during COVID where some recommendations were relevant in high-income countries but not low-income countries due to different contexts; example of how a low-income woman farmer in a remote place will use technology differently from a large farmer in Europe or North America
MAJOR DISCUSSION POINT
International Cooperation and the UN’s Role
AGREED WITH
Yoshua Bengio, Anne Bouverot, Balaraman Ravindran
Argument 3
The panel should network scientists globally, set research priorities, and ensure equity is at the heart of AI development
EXPLANATION
Swaminathan envisions the scientific panel as a networking hub that connects researchers worldwide across different sectors while maintaining focus on safety, risks, and equity. She emphasizes that as a UN-led initiative, the panel must prioritize public good and equitable outcomes.
EVIDENCE
Suggestion for sectoral networking (health, education, agriculture), looking at evidence as it emerges, encouraging research, setting priorities, and addressing unanticipated risks and harms
MAJOR DISCUSSION POINT
The Independent International Scientific Panel on AI
AGREED WITH
António Guterres, Yoshua Bengio, Josephine Teo
Anne Bouverot
2 arguments · 142 words per minute · 501 words · 211 seconds
Argument 1
Understanding AI through scientific evidence is essential before making policy decisions, as fear comes from lack of understanding
EXPLANATION
Bouverot emphasizes that understanding must precede policymaking, both for citizens and policymakers, as people tend to fear what they don’t understand. She advocates for scientific panels as the right approach to building this understanding.
EVIDENCE
Quote from Marie Curie: ‘Nothing in life is to be feared. Everything is to be understood. And now is the time to understand more’; France’s support for the scientific panel and nomination of Joëlle Barral, an AI and health scientist
MAJOR DISCUSSION POINT
The Role of Science in AI Governance
AGREED WITH
António Guterres, Yoshua Bengio, Brad Smith, Soumya Swaminathan, Josephine Teo, Anil Ananthaswamy
Argument 2
Different predictions about job displacement lead to different policy outcomes – universal basic income versus training and reskilling
EXPLANATION
Bouverot illustrates how varying scientific predictions about AI’s impact on employment lead to fundamentally different policy approaches. She contrasts predictions of job elimination (leading to universal basic income policies) with job transformation predictions (leading to training and reskilling policies).
EVIDENCE
2013 Oxford prediction that half of jobs would disappear within 10 years; Elon Musk’s prediction at Bletchley Park that half of jobs would disappear within two years; economists’ view that 80% of jobs will be transformed rather than eliminated
MAJOR DISCUSSION POINT
AI Impact Assessment and Research Priorities
AGREED WITH
Yoshua Bengio, Soumya Swaminathan, Balaraman Ravindran
DISAGREED WITH
Yoshua Bengio
Ajay Sood
1 argument · 138 words per minute · 304 words · 131 seconds
Argument 1
India’s experience with digital public infrastructure demonstrates embedding governance through technical design
EXPLANATION
Sood explains India’s approach of integrating governance directly into technical systems design, which they call ‘techno-legal.’ He argues this approach, successfully used in digital public infrastructure for financial transactions and identity verification, could be applied to AI governance.
EVIDENCE
India’s experience with digital public infrastructure rolled out at public scale with safety and security for financial transactions and identity verification; the concept of ‘techno-legal’ mentioned by the Prime Minister in Paris summit
MAJOR DISCUSSION POINT
Evidence-Based Policy Making
Balaraman Ravindran
2 arguments · 169 words per minute · 483 words · 171 seconds
Argument 1
We lack sufficient evidence about AI’s effects on social fabric, children, and different cultural contexts in countries like India
EXPLANATION
Ravindran points out significant gaps in understanding how AI affects society, particularly in the Global South where cultural contexts differ from the West. He emphasizes the need for India-specific evidence about AI’s impact on youth, mental health, and social isolation.
EVIDENCE
Lack of evidence about AI’s effects on social fabric, children’s isolation, and whether effects are uniform between cities and rural India due to different cultural setups; stories about AI dependence coming from the West rather than India-specific research
MAJOR DISCUSSION POINT
AI Impact Assessment and Research Priorities
AGREED WITH
Yoshua Bengio, Soumya Swaminathan, Anne Bouverot
Argument 2
Research should focus on understanding AI’s effectiveness in specific contexts like agriculture and education, with proper benchmarking
EXPLANATION
Ravindran argues for developing India-specific benchmarks to evaluate AI effectiveness in key sectors like agriculture and education. He emphasizes the need to understand potential flaws and limitations of AI applications in these contexts before widespread deployment.
EVIDENCE
Need for benchmarks in India to evaluate AI effectiveness in agriculture; questions about building bots as co-pilots for farmers; preliminary studies on AI in education showing effectiveness as a function of habit, but uncertainty about causal factors
MAJOR DISCUSSION POINT
AI Impact Assessment and Research Priorities
Josephine Teo
5 arguments · 140 words per minute · 901 words · 385 seconds
Argument 1
There’s tension between moving quickly given AI’s pace and moving carefully based on evidence, but both impulses are necessary
EXPLANATION
Teo acknowledges the inherent challenge in AI governance of balancing speed with caution, given the rapid pace of AI development versus the need for evidence-based decision making. She argues that while difficult, this balance is achievable through integration of science and policy.
EVIDENCE
Recognition that balancing quick movement with careful evidence-based approach is ‘not easy, but it is not an effort that we must give up on’; emphasis on international cooperation for interoperable approaches
MAJOR DISCUSSION POINT
AI Development Pace and Policy Challenges
AGREED WITH
António Guterres, Yoshua Bengio, Soumya Swaminathan
DISAGREED WITH
Yoshua Bengio
Argument 2
The UN’s role is crucial in facilitating global discourse and preventing fragmentation in the AI governance landscape
EXPLANATION
Teo emphasizes the UN’s unique legitimacy and inclusiveness in addressing the increasingly fragmented global AI governance landscape. She argues that the UN’s value lies in encouraging interoperability across multiple initiatives, frameworks, and institutions.
EVIDENCE
Recognition of multiple initiatives, frameworks and institutions creating fragmentation; the UN’s ‘unique value lies in your legitimacy and inclusiveness to encourage interoperability across efforts’
MAJOR DISCUSSION POINT
International Cooperation and the UN’s Role
AGREED WITH
António Guterres, Brad Smith, Soumya Swaminathan
Argument 3
The panel’s multidisciplinary approach covering machine learning, social science, and ethics is necessary for complex AI governance challenges
EXPLANATION
Teo supports the comprehensive scope of the Independent International Scientific Panel, arguing that AI governance challenges require expertise across technical, social, and ethical domains. She views this multidisciplinary approach as essential for addressing AI’s complexity.
EVIDENCE
The panel’s coverage of ‘machine learning, applied AI, social science, ethics’ as necessary to address complexity of AI governance challenges; building on UN High-Level Advisory Body on AI work
MAJOR DISCUSSION POINT
The Independent International Scientific Panel on AI
AGREED WITH
António Guterres, Yoshua Bengio, Soumya Swaminathan
Argument 4
Scientific input should be viewed as a foundation for effective governance rather than a constraint on policy flexibility
EXPLANATION
Teo argues for reframing the relationship between science and policy, positioning scientific evidence not as a limitation but as an enabling foundation for more durable and effective governance that maintains public trust.
EVIDENCE
Encouragement to view scientific input ‘not as a constraint on policy flexibility, but as a foundation for more durable, effective governance that can maintain public trust’
MAJOR DISCUSSION POINT
The Role of Science in AI Governance
AGREED WITH
António Guterres, Yoshua Bengio, Brad Smith, Soumya Swaminathan, Anne Bouverot, Anil Ananthaswamy
Argument 5
Small states like Singapore can contribute through dedicated investments in AI research, safety institutes, and regional cooperation
EXPLANATION
Teo demonstrates how smaller nations can play significant roles in global AI governance through targeted investments and regional leadership. She outlines Singapore’s comprehensive approach including research funding, safety institutions, and regional coordination efforts.
EVIDENCE
Singapore’s billion-dollar national AI R&D plan, digital trust center as AI safety institute, center for advanced technologies in online safety, hosting International Scientific Exchange on AI Safety, Singapore Consensus on Global AI Safety Research Priorities, AI Safety Red Teaming Challenge, and ASEAN leadership on AI governance
MAJOR DISCUSSION POINT
Capacity Building and Inclusivity
Anil Ananthaswamy
1 argument · 53 words per minute · 568 words · 640 seconds
Argument 1
We cannot govern what we do not understand – understanding technical details is crucial for effective AI governance
EXPLANATION
Ananthaswamy emphasizes that effective governance requires deep technical understanding, using the example that if 90% of AI is matrix multiplication, even a 0.01% improvement in efficiency has huge energy implications. This highlights how seemingly small technical details can have massive real-world consequences that policymakers need to comprehend.
EVIDENCE
Example that 90% of AI is matrix multiplication and a 0.01% improvement in efficiency of matrix multiplication has huge energy implications
MAJOR DISCUSSION POINT
The Role of Science in AI Governance
AGREED WITH
António Guterres, Yoshua Bengio, Brad Smith, Soumya Swaminathan, Anne Bouverot, Josephine Teo
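Ananthaswamy’s matrix-multiplication point can be checked with simple arithmetic. A minimal sketch of that back-of-envelope reasoning, using hypothetical figures (the total energy and the shares below are illustrative assumptions, not sourced data):

```python
# Back-of-envelope sketch: if most AI compute is matrix multiplication,
# even a tiny relative efficiency gain scales to a large absolute saving.

def matmul_savings_twh(total_ai_energy_twh, matmul_share, efficiency_gain):
    """Energy saved (TWh/year) from a relative efficiency gain in matmul."""
    return total_ai_energy_twh * matmul_share * efficiency_gain

# Hypothetical numbers: 100 TWh/yr of AI compute, 90% spent on matmul,
# and a 0.01% (1e-4) efficiency improvement.
saving = matmul_savings_twh(100.0, 0.90, 1e-4)
print(f"{saving:.3f} TWh/year saved")  # prints "0.009 TWh/year saved"
```

Even under these modest assumptions, a 0.01% gain frees roughly 9 GWh per year, which is the scale argument behind “small technical details have massive real-world consequences.”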
Amandeep Singh Gill
2 arguments · 136 words per minute · 644 words · 283 seconds
Argument 1
There is a critical loop between science and evidence, and evidence and policy that needs to be explored and strengthened
EXPLANATION
Gill identifies a fundamental relationship where science generates evidence, which then informs policy decisions, which in turn should guide future scientific research priorities. He emphasizes the importance of understanding and optimizing this feedback loop, particularly in the context of the new UN International Independent Scientific Panel on AI.
EVIDENCE
The establishment of the International Independent Scientific Panel at the United Nations as a significant development in strengthening the science-policy interface
MAJOR DISCUSSION POINT
The Role of Science in AI Governance
Argument 2
Trusted evidence during crises like COVID demonstrates the critical importance of rapid, evidence-based decision making for emerging technologies
EXPLANATION
Gill draws parallels between the COVID pandemic response and AI governance, noting how trusted scientific evidence was essential during the health crisis. He suggests that similar evidence-based approaches are needed for AI governance, where rapid technological change requires quick but informed policy responses.
EVIDENCE
Reference to WHO’s experience during COVID when evidence and trusted evidence was critical for decision-making
MAJOR DISCUSSION POINT
Evidence-Based Policy Making
Agreements
Agreement Points
Science must be central to AI governance and policy-making
Speakers: António Guterres, Yoshua Bengio, Brad Smith, Soumya Swaminathan, Anne Bouverot, Josephine Teo, Anil Ananthaswamy
Science must be at the center of international AI cooperation to close knowledge gaps and provide shared baseline analysis
Scientific panels should provide neutral, fact-based evaluation accessible to everyone, especially policymakers
People often disagree on solutions because they lack common understanding of problems; science helps create shared contextual understanding
Evidence-based governance requires rapid review and adaptation as the field moves quickly, similar to COVID response
Understanding AI through scientific evidence is essential before making policy decisions, as fear comes from lack of understanding
Scientific input should be viewed as a foundation for effective governance rather than a constraint on policy flexibility
We cannot govern what we do not understand – understanding technical details is crucial for effective AI governance
All speakers strongly agreed that scientific evidence and understanding must form the foundation of AI governance, moving away from hype, guesswork, and fear-based approaches toward fact-based policy-making
The UN’s unique role and legitimacy in global AI governance
Speakers: António Guterres, Brad Smith, Soumya Swaminathan, Josephine Teo
AI does not stop at borders and requires global cooperation; the UN provides unique legitimacy and inclusiveness for AI governance
The UN represents one of humanity’s greatest accomplishments and remains indispensable for global problem-solving
Global governance is needed with systems linking national bodies to ensure all voices are heard, especially from developing countries
The UN’s role is crucial in facilitating global discourse and preventing fragmentation in the AI governance landscape
Speakers unanimously supported the UN’s central role in AI governance, emphasizing its unique legitimacy, global reach, and ability to prevent fragmentation while ensuring inclusive participation
The Independent International Scientific Panel on AI is a crucial development
Speakers: António Guterres, Yoshua Bengio, Soumya Swaminathan, Josephine Teo
The panel will provide shared baseline analysis to help countries move from philosophical debates to technical coordination
The panel’s global and multidisciplinary aspect rooted in the UN is important for addressing power relationships and effects on developing countries
The panel should network scientists globally, set research priorities, and ensure equity is at the heart of AI development
The panel’s multidisciplinary approach covering machine learning, social science, and ethics is necessary for complex AI governance challenges
All speakers viewed the establishment of the Independent International Scientific Panel on AI as a significant and necessary step for global AI governance, emphasizing its multidisciplinary nature and global scope
AI development pace creates governance challenges requiring rapid adaptation
Speakers: António Guterres, Yoshua Bengio, Soumya Swaminathan, Josephine Teo
AI innovation moves at light speed, outpacing collective ability to understand and govern it
The rapid and uneven growth of AI capabilities creates difficulty in grasping implications, with significant lag between scientific evidence and policy
Evidence-based governance requires rapid review and adaptation as the field moves quickly, similar to COVID response
There’s tension between moving quickly given AI’s pace and moving carefully based on evidence, but both impulses are necessary
Speakers agreed that the unprecedented pace of AI development creates fundamental challenges for governance, requiring new approaches that can adapt quickly while maintaining evidence-based decision-making
Need for inclusive global participation, especially from developing countries
Speakers: Yoshua Bengio, Soumya Swaminathan, Anne Bouverot, Balaraman Ravindran
The panel’s global and multidisciplinary aspect rooted in the UN is important for addressing power relationships and effects on developing countries
Global governance is needed with systems linking national bodies to ensure all voices are heard, especially from developing countries
Different predictions about job displacement lead to different policy outcomes – universal basic income versus training and reskilling
We lack sufficient evidence about AI’s effects on social fabric, children, and different cultural contexts in countries like India
Speakers emphasized the critical importance of ensuring that AI governance includes diverse global perspectives, particularly from developing countries and the Global South, recognizing that AI impacts vary significantly across different contexts
Similar Viewpoints
Both emphasized the need to move beyond hype and speculation toward evidence-based understanding as the foundation for effective policy-making
Speakers: António Guterres, Brad Smith
Policy cannot be built on guesswork or hype but needs facts that can be trusted and shared across countries
People often disagree on solutions because they lack common understanding of problems; science helps create shared contextual understanding
Both drew parallels between AI governance challenges and pandemic response, emphasizing the need for rapid evidence processing and policy adaptation in fast-moving technological contexts
Speakers: Yoshua Bengio, Soumya Swaminathan
The rapid and uneven growth of AI capabilities creates difficulty in grasping implications, with significant lag between scientific evidence and policy
Evidence-based governance requires rapid review and adaptation as the field moves quickly, similar to COVID response
Both emphasized that AI should serve humanity and enhance human capabilities rather than simply pursuing technological advancement for its own sake
Speakers: Brad Smith, António Guterres
The goal should be using machines to make people smarter rather than just building smarter machines
Science-led governance accelerates solutions and helps identify where AI can do the most good fastest
Both highlighted the need for context-specific evidence and governance approaches that account for different cultural, economic, and social contexts, particularly in developing countries
Speakers: Soumya Swaminathan, Balaraman Ravindran
Global governance is needed with systems linking national bodies to ensure all voices are heard, especially from developing countries
We lack sufficient evidence about AI’s effects on social fabric, children, and different cultural contexts in countries like India
Unexpected Consensus
Technology industry leader strongly supporting UN multilateralism
Speakers: Brad Smith
The UN represents one of humanity’s greatest accomplishments and remains indispensable for global problem-solving
It was notable that a major technology industry executive provided such strong endorsement of the UN’s role and multilateral approaches, given that tech companies often prefer less regulated environments
Agreement on the limitations of AI predictions and the need for humility
Speakers: Brad Smith, Anne Bouverot
The goal should be using machines to make people smarter rather than just building smarter machines
Different predictions about job displacement lead to different policy outcomes – universal basic income versus training and reskilling
Both speakers, despite coming from different sectors, showed remarkable agreement on the unreliability of AI predictions and the need for more measured, evidence-based approaches rather than grandiose claims
Small state leadership in global AI governance
Speakers: Josephine Teo
Small states like Singapore can contribute through dedicated investments in AI research, safety institutes, and regional cooperation
The comprehensive leadership role that Singapore, as a small state, is taking in global AI governance was unexpected, demonstrating that meaningful contributions don’t require being a major power
Overall Assessment

There was remarkably strong consensus among all speakers on the fundamental principles of AI governance: the centrality of science, the importance of evidence-based policy-making, the UN’s crucial role, the need for global cooperation, and the importance of inclusive participation. The main areas of agreement included the establishment of the Independent International Scientific Panel, the challenges posed by AI’s rapid development pace, and the need to ensure AI serves humanity rather than the reverse.

Very high level of consensus with no significant disagreements identified. This strong alignment suggests a mature understanding of AI governance challenges and broad support for the UN-led approach. The implications are positive for advancing global AI governance, as the lack of fundamental disagreements among diverse stakeholders (government officials, scientists, industry leaders, international organizations) indicates a solid foundation for collaborative action and policy development.

Differences
Different Viewpoints
Speed vs. caution in AI governance implementation
Speakers: Josephine Teo, Yoshua Bengio
There's tension between moving quickly given AI's pace and moving carefully based on evidence, but both impulses are necessary. The rapid and uneven growth of AI capabilities creates difficulty in grasping implications, with significant lag between scientific evidence and policy
Teo emphasizes balancing speed with evidence-based caution as achievable, while Bengio highlights the inherent lag problem between rapid AI development and policy response as a fundamental challenge
Policy approach – detailed vs. high-level principles
Speakers: Yoshua Bengio, Anne Bouverot
High-level principles should be applied without going into details since details change rapidly. Different predictions about job displacement lead to different policy outcomes – universal basic income versus training and reskilling
Bengio advocates for broad principles that avoid technical details due to rapid change, while Bouverot emphasizes the importance of specific evidence and predictions to determine concrete policy approaches
Focus on machine intelligence vs. human enhancement
Speakers: Brad Smith, António Guterres
The goal should be using machines to make people smarter rather than just building smarter machines. Science-led governance accelerates solutions and helps identify where AI can do the most good fastest
Smith focuses specifically on using AI to enhance human capabilities rather than just building smarter machines, while Guterres emphasizes using science to optimize AI deployment for maximum benefit without specifically prioritizing human enhancement
Unexpected Differences
Evidence requirements for policy action
Speakers: Soumya Swaminathan, Yoshua Bengio
Evidence-based governance requires rapid review and adaptation as the field moves quickly, similar to COVID response. The rapid and uneven growth of AI capabilities creates difficulty in grasping implications, with significant lag between scientific evidence and policy
Unexpectedly, both speakers acknowledge rapid change but reach different conclusions – Swaminathan suggests COVID-like rapid evidence processing can work for AI, while Bengio sees the lag as an inherent problem requiring different approaches
Role of technical details in governance
Speakers: Anil Ananthaswamy, Yoshua Bengio
We cannot govern what we do not understand – understanding technical details is crucial for effective AI governance. High-level principles should be applied without going into details since details change rapidly
Surprising disagreement between the moderator and a key panelist on whether technical details are essential for governance or should be avoided due to rapid change
Overall Assessment

The discussion revealed surprisingly few fundamental disagreements among speakers, with most conflicts arising around implementation approaches rather than core principles. Main areas of disagreement included the balance between speed and caution in governance, the level of technical detail needed in policy frameworks, and whether to focus on machine intelligence or human enhancement.

Low to moderate disagreement level with high consensus on core principles but divergent views on implementation strategies. This suggests strong foundation for cooperation but potential challenges in developing specific governance mechanisms and timelines.

Partial Agreements
All agree on the fundamental need for science-based AI governance, but differ on implementation approaches – Guterres emphasizes institutional frameworks, Swaminathan focuses on rapid evidence processing similar to pandemic response, and Bouverot prioritizes understanding before action
Speakers: António Guterres, Soumya Swaminathan, Anne Bouverot
Science must be at the center of international AI cooperation to close knowledge gaps and provide shared baseline analysis. Evidence-based governance requires rapid review and adaptation as the field moves quickly, similar to COVID response. Understanding AI through scientific evidence is essential before making policy decisions, as fear comes from lack of understanding
Both emphasize the need for inclusive AI governance that considers developing country perspectives, but Swaminathan focuses on institutional mechanisms for inclusion while Ravindran emphasizes the evidence gap about AI impacts in specific cultural contexts
Speakers: Soumya Swaminathan, Balaraman Ravindran
Global governance is needed with systems linking national bodies to ensure all voices are heard, especially from developing countries. We lack sufficient evidence about AI's effects on social fabric, children, and different cultural contexts in countries like India
Both support the UN panel’s multidisciplinary approach, but Bengio emphasizes global power dynamics and developing country impacts while Teo focuses on technical complexity requiring diverse expertise
Speakers: Yoshua Bengio, Josephine Teo
The panel's global and multidisciplinary aspect rooted in the UN is important for addressing power relationships and effects on developing countries. The panel's multidisciplinary approach covering machine learning, social science, and ethics is necessary for complex AI governance challenges
Takeaways
Key takeaways
Science must be at the center of international AI cooperation to provide shared baseline analysis and close knowledge gaps between countries
The newly established Independent International Scientific Panel on AI will serve as a crucial bridge between scientific evidence and global AI policymaking
AI governance requires balancing the need to move quickly, given the pace of technology development, with moving carefully based on emerging evidence
The UN's unique legitimacy and inclusiveness make it indispensable for preventing fragmentation in the global AI governance landscape
Evidence-based policy making is essential – policies cannot be built on hype or guesswork but must be grounded in facts that can be trusted and shared across countries
AI development should focus on using machines to make people smarter rather than just building smarter machines
Global AI governance must be inclusive, ensuring voices from developing countries and marginalized communities are heard in policy discussions
High-level AI principles (transparency, accountability, fairness, safety) show substantial convergence, but the challenge lies in operationalizing them across different contexts
Resolutions and action items
The UN General Assembly confirmed 40 experts for the Independent International Scientific Panel on AI, with work beginning immediately
The panel will deliver its first report ahead of the Global Summit and Global Dialogue on AI Governance in July
Singapore will host the second International Scientific Exchange on AI Safety on May 17-18
ASEAN will work collectively to develop AI safety benchmarks reflecting regional concerns
India will continue implementing its National AI Governance Framework with public-private partnerships for compute resources
Microsoft committed to putting full energy and resources toward supporting UN AI governance efforts
Countries should invest in standardized evaluation methodologies that work across different regulatory contexts
Unresolved issues
Lack of sufficient evidence about AI's effects on social fabric, children, and different cultural contexts, particularly in developing countries
Uncertainty about causal relationships in AI effectiveness (e.g., whether students use AI more because it's effective or become more effective because they use it more)
Insufficient benchmarking systems to evaluate AI effectiveness in specific sectors like agriculture and education in different national contexts
Potential unanticipated risks and harms from AI that have not yet been considered or studied
The challenge of keeping scientific assessments and policy responses in step with rapid AI development
How to ensure AI benefits are widely shared and do not exacerbate global inequalities
The need for capacity building so all countries can meaningfully engage with technical AI challenges
Suggested compromises
Balancing the tension between moving quickly (given AI's rapid pace) and moving carefully (based on emerging evidence) through integration of science and policy
Developing high-level principles that can be applied without going into technical details, since details change rapidly
Using technology-enabled guardrails for AI deployment rather than relying solely on traditional policy mechanisms
Adopting a 'techno-legal' approach that embeds governance through technical design, as demonstrated by India's digital public infrastructure
Creating interoperable approaches across different jurisdictions while allowing for local adaptation
Viewing scientific input as a foundation for effective governance rather than a constraint on policy flexibility
Thought Provoking Comments
We cannot govern what we do not understand… AI does not stop at borders, and no nation can fully grasp its implications on its own. If we want AI to serve humanity, policy cannot be built on guesswork. It cannot be built on hype or disinformation. We need facts we can trust and share across countries and across sectors. Less noise, more knowledge.
This comment established the fundamental premise that effective governance requires deep understanding, not speculation. It reframed AI governance from a reactive, fear-based approach to a proactive, evidence-based one. The phrase ‘less noise, more knowledge’ became a recurring theme throughout the discussion.
This opening statement set the entire tone for the session, establishing science-based evidence as the cornerstone of all subsequent discussions. Every speaker referenced back to this need for factual understanding over speculation, making it the foundational framework for the conversation.
Speaker: António Guterres
Maybe unlike in the case of climate, the scientists themselves don't always agree on what to expect for the future or even how to interpret the science that exists… Even if we're not certain about a particular risk, we might have clues about it. But if the risk has huge severity, in other words, if it does unfold, then it could be catastrophic, then policymakers need to pay attention.
This comment introduced crucial nuance by acknowledging that AI governance faces unique challenges compared to other scientific policy areas like climate change. It highlighted the paradox of needing to act on uncertain but potentially catastrophic risks, introducing the concept of precautionary governance based on severity rather than certainty.
This shifted the discussion from seeking definitive answers to managing uncertainty intelligently. It influenced subsequent speakers to focus on adaptive governance frameworks and the importance of continuous monitoring rather than waiting for complete scientific consensus.
Speaker: Yoshua Bengio
There is a well-known economic theory that says that humanity is, in many ways, almost destined to repeat its great economic mistakes every 80 years… Just as there is a risk that humanity forgets the mistakes it made 80 years ago, humanity runs the risk of forgetting the great successes it created 80 years ago. It was just over 80 years ago that the world came together to create the United Nations.
This historical perspective was profound because it recontextualized current AI governance challenges within the broader arc of human institutional memory and cooperation. It elevated the discussion from technical policy details to fundamental questions about how humanity learns from history and builds lasting institutions.
This comment fundamentally shifted the conversation’s scope, moving it from immediate technical concerns to long-term institutional thinking. It provided historical legitimacy for multilateral approaches and influenced subsequent speakers to emphasize the UN’s unique role in global coordination.
Speaker: Brad Smith
One of the reasons people so often disagree about the solution is they don’t have a common understanding of the problem. They don’t spend enough time talking about the problem… Why does that matter today? Because what we’re here to talk about today is all about creating a more common understanding together based on science of where artificial intelligence is going.
This insight identified a fundamental flaw in policy discussions – the tendency to jump to solutions without establishing shared problem definitions. It provided a meta-framework for understanding why AI governance is so contentious and positioned scientific assessment as the solution to this foundational issue.
This comment reframed the entire purpose of the scientific panel from providing answers to creating shared understanding of questions. It influenced the panel discussion to focus more on evidence-gathering methodologies and less on prescriptive solutions.
Speaker: Brad Smith
In COVID, we had to review a couple of hundred publications every day to understand what was happening… I think we may be in a similar situation with AI… Some of our recommendations were relevant in high-income countries but not in low-income countries because the context is very different… If AI has to work for everyone, then we need to make sure that those voices are heard.
This comment drew powerful parallels between pandemic response and AI governance while highlighting the critical issue of contextual relevance. It introduced the concept that universal technologies require locally-informed governance, challenging one-size-fits-all approaches.
This shifted the discussion toward inclusive governance models and highlighted the importance of diverse perspectives in scientific assessment. It influenced subsequent speakers to emphasize regional differences and the need for culturally-sensitive approaches to AI governance.
Speaker: Soumya Swaminathan
We don’t have enough evidence about how AI is even affecting the social fabric, how are children getting increasingly isolated with the adoption of AI… All of these stories are coming to us from the west, so what is it that’s happening in India?
This comment exposed a critical gap in the global AI discourse – the dominance of Western research and perspectives in understanding AI’s social impacts. It challenged the assumption that AI effects are universal and highlighted the need for region-specific research.
This comment brought urgent attention to research gaps and geographic bias in AI studies. It influenced the discussion to focus more on the need for diverse, locally-relevant research and evidence-gathering that reflects different cultural and economic contexts.
Speaker: Balaraman Ravindran
If your potential or probable outcome is the end of jobs, then you need to think about universal basic income… If what economists are saying is that 80% of the jobs will be transformed, then the policy outcome is training, skilling, reskilling… That’s why listening to economists and having the International Labor Organization and other institutions really follow closely what is happening is super important.
This comment demonstrated how different scientific assessments lead to fundamentally different policy responses. It showed the concrete policy implications of getting the evidence right, using employment impacts as a clear example of how scientific uncertainty translates to policy uncertainty.
This practical example crystallized the abstract discussion about evidence-based policy into concrete terms that all participants could understand. It reinforced the importance of accurate scientific assessment and influenced the closing discussions about operationalizing scientific insights.
Speaker: Anne Bouverot
Overall Assessment

These key comments collectively transformed what could have been a technical discussion about AI governance into a profound examination of how humanity makes collective decisions about transformative technologies. The discussion evolved through several phases: Guterres established the foundational principle that governance requires understanding; Bengio introduced the complexity of governing under uncertainty; Smith provided historical context and identified the problem-definition challenge; Swaminathan brought practical experience from pandemic response and highlighted equity concerns; Ravindran exposed research gaps and geographic bias; and Bouverot demonstrated concrete policy implications. Together, these insights created a rich, multi-layered conversation that moved beyond simple calls for regulation to explore the deeper challenges of building legitimate, effective, and inclusive global governance for emerging technologies. The discussion successfully bridged technical expertise, policy experience, and institutional wisdom to create a framework for thinking about AI governance that is both scientifically grounded and politically realistic.

Follow-up Questions
How can we develop standardized evaluation methodologies that work across different regulatory contexts for AI governance?
This is crucial for operationalizing high-level AI principles like transparency, accountability, fairness, and safety across different jurisdictions and ensuring interoperability.
Speaker: Josephine Teo
What are the psychological effects of chatbots on people, particularly in different cultural contexts?
Bengio noted that while there is anecdotal evidence, scientific studies are just beginning to emerge, and there’s a need to understand these effects across different regions and cultures.
Speaker: Yoshua Bengio
How is AI affecting children’s social isolation and mental health, particularly comparing urban vs rural contexts in India?
There’s concern about increasing isolation among children with AI adoption, but evidence is mostly coming from Western studies rather than Indian contexts with different cultural setups.
Speaker: Balaraman Ravindran
What benchmarks can evaluate the efficiency and effectiveness of AI models in agriculture in India?
As governments push for AI adoption in agriculture, there’s a need for India-specific benchmarks to assess these interventions and understand potential flaws.
Speaker: Balaraman Ravindran
What is the causal relationship between AI usage frequency and learning effectiveness in education?
Preliminary studies show effectiveness correlates with usage habits, but it’s unclear whether more usage leads to better effects or better effects lead to more usage.
Speaker: Balaraman Ravindran
How can we ensure AI governance frameworks account for diverse user contexts, such as low-income women farmers in remote areas versus large-scale farmers with advanced machinery?
Different contexts require different approaches, and governance must ensure all voices are heard, particularly from underrepresented groups who may use technology differently.
Speaker: Soumya Swaminathan
What are the specific impacts of AI on youth in different cultural and economic contexts?
While there are stories from the West about AI dependence among children and mentally challenged individuals, there’s insufficient evidence about what’s happening in India and other developing countries.
Speaker: Balaraman Ravindran
How can we develop technical frameworks for embedding governance through design (techno-legal approaches) for AI systems?
Building on India’s experience with digital public infrastructure, there’s a need to develop frameworks and technologies that embed governance directly into AI system design.
Speaker: Ajay Sood
What unanticipated risks and harms from AI have not yet been considered or studied?
As AI deployment accelerates, there may be unforeseen negative consequences that require proactive research and monitoring to identify and address.
Speaker: Soumya Swaminathan
How can we create and maintain global databases of scientific data that are accessible to researchers worldwide for AI-assisted scientific discovery?
To realize AI’s potential in accelerating scientific discovery, there’s a need for internationally funded and constructed scientific databases that are openly available.
Speaker: Anne Bouverot

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Smaller Footprint Bigger Impact Building Sustainable AI for the Future

Session at a glance: summary, key points, and speakers overview

Summary

This discussion focused on sustainable and resilient artificial intelligence development, particularly addressing AI’s growing energy consumption and environmental impact while ensuring global accessibility. The event, co-organized by France, UNESCO, and the Sustainable AI Coalition, brought together government officials and industry leaders to explore how AI can work efficiently and responsibly for both people and the planet.


French Minister Anne Le Hénanff emphasized that sustainable AI is an imperative rather than an option, highlighting two critical challenges: AI's rapidly growing energy demands that threaten to outpace green energy progress, and the fairness crisis where massive AI models create new divides by excluding resource-constrained regions. She announced the growth of the Sustainable AI Coalition from 90 to over 220 partners, focusing on three pillars: research, measurement through standardization, and action through policy implementation.


UNESCO's Dr. Tawfik Jelassi argued that the next breakthrough in AI will come from building leaner, more resilient systems rather than larger models, noting that current AI inference consumes hundreds of gigawatt hours annually. He officially launched the Resilient AI Challenge, which aims to demonstrate how open-source AI models can be optimized and compressed while maintaining performance and significantly reducing energy consumption.


Industry representatives James Manyika from Google and Arthur Mensch from Mistral AI discussed practical approaches to efficiency, including mixture of experts architectures, model compression, and the importance of open-source models to reduce redundant training costs. They emphasized that business incentives align with sustainability goals since energy constraints make efficiency commercially essential.


Government representatives from India and Kenya shared their approaches to balancing AI development with sustainability concerns, focusing on renewable energy infrastructure and practical applications that solve real-world problems. The discussion concluded with the announcement that the Resilient AI Challenge represents a crucial step toward making sustainable AI a global standard rather than an aspiration.


Keypoints

Major Discussion Points:

Sustainable AI as a Global Imperative: The discussion emphasized that sustainable AI is not optional but essential, driven by AI’s rapidly growing energy demands that threaten to outpace green energy progress and create new digital divides between resource-rich and resource-poor regions.


Technical Solutions for AI Efficiency: Panelists explored various approaches to reduce AI’s environmental impact, including model compression, sparse mixture of experts architectures, task-specific optimization, improved inference frameworks, and the strategic use of smaller models rather than pursuing ever-larger parameter counts.


Business Alignment with Environmental Goals: A key theme was that commercial interests are naturally aligning with sustainability objectives, as energy constraints and cost pressures make efficiency a competitive advantage, with AI companies essentially becoming utility companies that convert electricity into tokens.


Government and Policy Role: Discussion covered how governments can accelerate sustainable AI through public procurement requirements, investment in renewable energy infrastructure (including small modular reactors and fusion research), support for open-source projects, and development of environmental standards for AI systems.


The Resilient AI Challenge Initiative: The announcement and promotion of a global challenge to advance compressed, energy-efficient AI models, representing a concrete step from principles to action in making AI more sustainable and accessible worldwide.


Overall Purpose:

The discussion aimed to address the critical challenge of making AI development environmentally sustainable while maintaining accessibility and performance. The event served to launch the Resilient AI Challenge and build international cooperation around sustainable AI practices, moving from theoretical frameworks to practical implementation strategies.


Overall Tone:

The tone was consistently optimistic and collaborative throughout the discussion. Speakers maintained a solution-oriented approach, emphasizing opportunities rather than dwelling on problems. The conversation was highly technical yet accessible, with panelists building on each other’s points constructively. There was a strong sense of urgency balanced with confidence that the challenges are solvable through international cooperation, technological innovation, and aligned business incentives. The tone remained professional and forward-looking from start to finish, with no significant shifts in mood or approach.


Speakers

Speakers from the provided list:


Speaker 1: Event moderator/host (role inferred from context)


Anne Le Hénanff: France's Minister Delegate for AI and Digital Affairs


Dr. Tawfik Jelassi: Assistant Director General for Communication and Technology Sector at UNESCO


Anne Bouverot: Special Envoy on AI for France, panel moderator


Philip Thigo: Ambassador and Tech Envoy for Kenya


Arthur Mensch: CEO of Mistral AI


Abhishek Singh: Lead organizer of the AI Impact Summit, representing India's AI mission


James Manyika: Senior Vice President, Google Alphabet


Additional speakers:


None – all speakers mentioned in the transcript were included in the provided list of speaker names.


Full session report: comprehensive analysis and detailed insights

This comprehensive discussion on sustainable and resilient artificial intelligence development brought together government officials, industry leaders, and international organisations to address one of the most pressing challenges in AI: balancing rapid technological advancement with environmental responsibility and global accessibility. The event featured Anne Bouverot, Special Envoy on AI for France, as moderator, alongside speakers including Dr. Tawfik Jelassi, Assistant Director General for Communication and Technology Sector at UNESCO, industry leaders from Google and Mistral AI, and government representatives from India and Kenya.


The Imperative for Sustainable AI

French Minister Anne Le Hénanff established the foundational premise that sustainable AI has become an absolute imperative, articulating two critical challenges: AI's energy demands are growing at a pace that threatens to outpace green energy progress, and the development of massive AI models without sustainability considerations is creating new digital divides that exclude regions and communities lacking adequate resources.


The Minister announced the Sustainable AI Coalition’s remarkable growth from 90 initial partners to over 220 partners, including technology firms, startups, utilities, NGOs, and research institutions. The Coalition’s three-pillar approach—research, measurement through standardisation, and action through policy implementation—provides a comprehensive framework for addressing sustainability challenges across the AI development lifecycle.


Redefining AI Progress: From Scale to Resilience

Dr. Tawfik Jelassi from UNESCO introduced a paradigm-shifting perspective that fundamentally challenged the prevailing industry narrative about AI advancement. His provocative question—"What if the next breakthrough in AI is not about building ever larger models, but about building leaner, more resilient systems?"—reframed the entire discussion from defensive justifications of AI's energy consumption to proactive strategies for efficiency-driven innovation.


Dr. Jelassi provided stark context for the urgency of this shift, noting that current AI inference already consumes hundreds of gigawatt hours annually, with projections suggesting AI could account for 3% of worldwide electricity production by 2030 according to the International Energy Agency. His particularly striking comparison—that a single large AI model can consume over 1,000 megawatt hours of electricity, enough to power villages across India for an entire year—transformed abstract energy consumption figures into tangible inequity concerns.
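As a rough sanity check on that comparison, the arithmetic can be made explicit; the per-household consumption figure below is an assumption chosen for illustration, not a number from the session.

```python
# Order-of-magnitude check of the "powers villages for a year" comparison.
# The per-household figure is an illustrative assumption, not session data.
training_energy_mwh = 1_000        # energy cited for one large AI model
kwh_per_household_year = 900       # assumed annual usage of a rural household

households_powered = training_energy_mwh * 1_000 / kwh_per_household_year
print(f"~{households_powered:,.0f} households supplied for a year")
```

Under these assumed figures, one such training run corresponds to roughly a thousand households' annual electricity, which is consistent in magnitude with the "villages for a year" framing.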


The launch of the Resilient AI Challenge represents a concrete manifestation of this efficiency-first philosophy. Rather than comparing entirely different models, the challenge focuses on improving one base model per task, ensuring transparency, fairness, and rigorous benchmarking. Submissions will be evaluated on shared infrastructure and ranked on both accuracy and energy efficiency, generating clear evidence for the viability of compressed, optimised AI systems.
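One concrete family of techniques a challenge like this might surface is post-training weight quantization, sketched below in a minimal, self-contained form. This is an illustration of the general idea only, not the challenge's actual benchmarking setup.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: store each weight in one byte
    instead of four, trading a tiny accuracy loss for a 4x smaller model."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate float weights at inference time."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=(256, 256)).astype(np.float32)  # toy weight matrix

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

compression = w.nbytes / q.nbytes                      # 4.0: float32 -> int8
rel_error = np.abs(w - w_hat).max() / np.abs(w).max()  # bounded by ~1/254
print(f"{compression:.0f}x smaller, max relative error {rel_error:.4f}")
```

Storing weights in int8 cuts memory and memory bandwidth fourfold versus float32, and data movement is a large share of inference energy, which is one reason compression and energy efficiency tend to improve together.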


Industry Perspectives: Technical Innovation and Strategic Partnerships

James Manyika from Google emphasised that the company’s approach covers the entire performance-efficiency frontier through their Gemini model family, which ranges from high-performance Pro models to highly efficient Flash models. Significantly, he noted that the industry has moved away from focusing on parameter count, with companies now pursuing mixture of experts architectures where only a fraction of the model is activated for any given task.


Google’s commitment to sustainability extends to infrastructure investments, with Manyika outlining the company’s goal of achieving 24-7 carbon-free operations by 2030-2035, supported by investments in nuclear, geothermal, hydro, wind, and solar energy. He also highlighted their development of inference-specific TPU chips designed for efficient model deployment and their contribution to fusion energy research through AI-assisted plasma containment.


Manyika announced that 23 Gemma models are now available on India's AIKosh platform, demonstrating concrete international collaboration in making efficient AI models accessible globally.


Arthur Mensch from Mistral AI provided complementary insights, emphasising that sparse mixture of experts models can activate as little as 5% of their parameters whilst maintaining performance, dramatically reducing computational requirements. His observation that “being an AI company is turning into being a utility company” proved particularly influential, explaining why efficiency isn’t just an environmental consideration but an economic necessity in an increasingly competitive market with thinning margins.
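The sparse mixture-of-experts idea can be sketched in a few lines of code; the layer sizes, expert count, and random weights below are toy values for illustration, not Mistral's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n_experts, top_k = 32, 64, 2   # 2 of 64 experts => ~3% of expert weights used
experts = rng.normal(0.0, 0.1, size=(n_experts, d, d))  # one toy matrix per expert
router = rng.normal(0.0, 0.1, size=(d, n_experts))      # gating network weights

def moe_layer(x):
    """Sparse MoE forward pass for one token: score all experts cheaply,
    then run only the top-k of them and mix their outputs by a softmax gate."""
    logits = x @ router
    top = np.argsort(logits)[-top_k:]   # indices of the k highest-scoring experts
    gates = np.exp(logits[top])
    gates /= gates.sum()                # softmax over the selected experts only
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top)), top

x = rng.normal(size=d)
y, used = moe_layer(x)
print(f"experts run: {sorted(int(i) for i in used)} "
      f"({top_k / n_experts:.1%} of experts active)")
```

Because only the top-k selected experts are multiplied against the input, per-token compute scales with top_k rather than with the total expert count, which is how a model can keep most of its parameters while activating only a few percent of them on any given token.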


Mensch highlighted the strategic advantage of training models in regions with clean energy, specifically mentioning training in France (nuclear energy) and Sweden (hydro energy) for lower carbon intensity. He also argued that open-source model development prevents duplication of carbon-intensive training efforts across multiple organisations, amortising environmental costs across the entire ecosystem whilst enabling broader innovation.
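The effect of siting training on a clean grid can be made concrete with a toy emissions calculation; the grid intensity figures below are round illustrative assumptions rather than session data, and real values vary by year and methodology.

```python
# Illustrative only: grid carbon intensities are assumed round figures.
GRID_INTENSITY_G_PER_KWH = {"france": 50, "sweden": 40, "world_average": 480}

def training_emissions_tonnes(energy_mwh, grid):
    """CO2 for a training run: energy (MWh -> kWh) times grid intensity (g/kWh),
    converted from grams to tonnes."""
    grams = energy_mwh * 1_000 * GRID_INTENSITY_G_PER_KWH[grid]
    return grams / 1e6

run_mwh = 1_000  # same magnitude as the example cited earlier in the session
for grid in GRID_INTENSITY_G_PER_KWH:
    print(grid, training_emissions_tonnes(run_mwh, grid), "t CO2")
```

Under these assumptions, the same training run emits roughly an order of magnitude less CO2 on a nuclear- or hydro-dominated grid than on the world-average grid, which is the substance of the siting argument.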


Government Strategies: Pragmatic Approaches to AI Development

Abhishek Singh from India’s AI mission articulated a distinctly pragmatic approach, explicitly stating that India is “not chasing the trillion parameter models” and is “not in the parameter game.” Instead, India focuses on developing solutions using current-level models that can address societal problems across healthcare, education, and agriculture sectors.


This approach is driven by practical considerations: when deploying AI at the scale India envisions—potentially serving hundreds of millions of users—the cost per inference becomes material, particularly for public sector applications funded by taxpayer money. Singh provided a concrete example of collaboration with the Ministry of Power to use AI for improving grid efficiency, achieving 10-15% reductions in transmission and distribution losses.
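The value of the grid project translates directly into numbers: the energy recovered is throughput times the loss rate times the fractional reduction in those losses. The grid figures here are hypothetical, chosen only to show the arithmetic behind the 10-15% claim.

```python
def td_savings_gwh(throughput_gwh, loss_rate, loss_reduction):
    """Energy recovered when AI-assisted grid management trims T&D losses.

    throughput_gwh : energy entering the grid per year
    loss_rate      : fraction currently lost in transmission/distribution
    loss_reduction : fractional cut in those losses (e.g. 0.10 to 0.15)
    """
    return throughput_gwh * loss_rate * loss_reduction

grid_gwh = 100_000   # hypothetical annual grid throughput
loss_rate = 0.20     # 20% T&D losses, high but plausible for some grids
low = td_savings_gwh(grid_gwh, loss_rate, 0.10)
high = td_savings_gwh(grid_gwh, loss_rate, 0.15)
print(f"recovered energy: {low:.0f}-{high:.0f} GWh per year")
```

Even a modest fractional improvement compounds into large absolute savings at grid scale, which is why this application loops AI back into the energy problem it creates.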


India’s infrastructure strategy includes recent policy changes opening the nuclear sector to private investment for small modular reactors and developing off-grid solutions to reduce load on existing electrical grids, recognising that massive AI deployment requires dedicated energy infrastructure.


Ambassador Philip Tigo from Kenya provided the perspective of an energy-constrained but renewable-rich developing nation. Kenya’s energy mix is already 95% renewable, incorporating geothermal, wind, water, solar, and hydro sources, providing a green foundation for AI development. However, he emphasised the need for realistic approaches to AI sovereignty, noting that whilst every country desires to control the entire AI technology stack domestically, practical sustainability considerations require strategic choices about which components to develop locally versus accessing through international collaboration.


Ambassador Tigo raised important points about expanding the definition of AI safety beyond model behaviour to include environmental concerns, and highlighted the need for user education—teaching people when to use AI versus traditional search methods—as sustainability requires behavioural change alongside technical optimisation.


International Cooperation and Standards Development

The discussion highlighted significant progress in international cooperation, including the announcement of the second version of the global approach on standardisation for AI environmental sustainability, published collaboratively by the ITU, the Institute of Electrical and Electronics Engineers (IEEE), and the International Organization for Standardization (ISO). This work builds on foundations established through the UN Global Digital Compact and a UN Environment Assembly resolution.


Technical partnerships are emerging across the ecosystem, with collaborations mentioned between Mistral, Google, Hugging Face, AI Kosh, and Sarvam AI demonstrating how companies are working together to advance sustainable AI development. The Sustainable AI Coalition plans to launch AI research pitch sessions in 2026 to connect university projects with funding and industry partners, creating mechanisms for translating academic research into practical applications.


Convergence of Business and Environmental Interests

A particularly encouraging theme throughout the discussion was the natural alignment of commercial incentives with sustainability goals. Multiple speakers emphasised that energy constraints and cost pressures are making efficiency a competitive necessity rather than just an environmental consideration. As Arthur Mensch noted, the commoditisation of AI services means that companies with the most efficient operations will have significant competitive advantages.


This alignment extends to market access, as companies that can deploy effective AI solutions in resource-constrained environments—whether in developing countries or energy-limited regions of developed nations—can serve larger, more diverse markets whilst creating more inclusive business models.


Challenges and Future Directions

Despite strong consensus on core principles, several significant challenges remain. The tension between AI sovereignty aspirations and practical sustainability constraints requires ongoing negotiation, particularly for developing nations seeking to balance domestic capability development with environmental responsibility.


The need for comprehensive environmental impact assessment across different AI use cases and sectors remains largely unaddressed. As Ambassador Tigo noted, understanding environmental footprint requires deep analysis of specific applications, and current AI safety research has not adequately incorporated environmental concerns beyond model safety.


The fundamental test will be whether efficiency-focused development strategies can deliver at the scale required to meet growing inference demands from billions of users whilst maintaining performance standards and accessibility.


Conclusion: Practical Steps Toward Sustainable AI

This discussion marked a significant evolution in the global conversation about AI development, moving from defensive justifications of environmental impact to proactive strategies for efficiency-driven innovation. The strong consensus among diverse stakeholders—from major technology companies to developing nation governments—that sustainable AI is both necessary and achievable provides a foundation for coordinated global action.


The launch of the Resilient AI Challenge represents a concrete step from principles to implementation, providing a practical framework for demonstrating that sustainability and capability can advance together. The challenge’s focus on optimising open-source models for energy efficiency whilst maintaining performance will generate crucial evidence for the viability of efficiency-focused development approaches.
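Ranking submissions on both accuracy and energy efficiency is naturally a two-objective problem. A minimal sketch of one plausible approach, a Pareto front over the two metrics, is below; the entry names and scores are hypothetical, and the challenge’s actual ranking method may differ.

```python
def pareto_front(submissions):
    """Keep submissions not dominated on (accuracy up, energy down).

    A submission is dominated if another one is at least as accurate
    and uses no more energy, and is strictly better on one of the two.
    """
    front = []
    for name, acc, energy in submissions:
        dominated = any(
            a >= acc and e <= energy and (a > acc or e < energy)
            for _, a, e in submissions
        )
        if not dominated:
            front.append(name)
    return front

# Hypothetical entries: (name, task accuracy, Wh per 1,000 queries).
entries = [
    ("baseline",  0.80, 100.0),
    ("quantized", 0.79,  35.0),
    ("distilled", 0.75,  20.0),
    ("bloated",   0.78, 120.0),  # dominated: less accurate AND hungrier than baseline
]
print(pareto_front(entries))  # ['baseline', 'quantized', 'distilled']
```

A Pareto ranking makes the trade-off explicit: a compressed model that gives up one point of accuracy for a threefold energy cut stays on the front rather than being penalised by a single blended score.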


The discussion’s most significant contribution may be its reframing of sustainability from a constraint on AI development to a driver of innovation. By establishing that the future of AI will be defined by resilience rather than scale alone, and demonstrating that business interests naturally align with environmental goals due to energy constraints, the participants created a compelling case for why sustainable AI development is not just environmentally responsible but economically inevitable.


Success will be measured not just by the environmental efficiency of AI systems, but by their ability to deliver meaningful benefits to underserved communities and resource-constrained regions, ensuring that AI development truly serves people, planet, and prosperity in an integrated and sustainable manner. The collaborative partnerships, technical innovations, and policy frameworks discussed provide a roadmap for achieving these goals through coordinated international action.


Session transcriptComplete transcript of the session
Speaker 1

And this is what we will explore at this event. To introduce the topic, we will first have two distinguished speakers. First, I have the honor to welcome Mrs. Anne Le Henanf, France’s Minister Delegate for AI and Digitalization Affairs. Welcome, Madam Minister.

Anne Le Henanf

Excellencies, distinguished guests, ladies and gentlemen, it’s an honor to address you at Smaller Footprints, Bigger Impact, co-organized by France, UNESCO, and the Sustainable AI Coalition. This event is a continuation of the work co-chaired by India and France in preparation of this AI Impact Summit, putting resiliency, sustainability and efficiency at the heart of the global agenda. The question we face is no longer how can AI work for us, but how can we ensure AI works efficiently, responsibly and fairly for people and for our planet. Resilient and sustainable AI is the key to unlocking digital transformation, environmental protection and inclusive development. Sustainable AI is not an option, it’s an imperative. First, it’s an energy and environment imperative as governments decarbonize.

AI’s energy demands threaten to outpace green energy progress. Model providers face a stark reality: AI’s energy needs are growing faster than supply. Second, it’s a fairness crisis. Massive AI models without sustainability create new divides and can exclude regions and communities lacking resources. That is why France, at the AI Action Summit, made sustainable AI a priority through the Sustainable AI Coalition, launched with UNEP, ITU and India as founding members. Our goal? Leverage AI to solve environmental challenges without exceeding planetary boundaries. From 90 initial partners, we have grown to over 220, including tech firms, startups, utilities, NGOs, and research institutions, backed by eight international organizations and 15 countries, with the Netherlands joining this year.

Sustainable AI is now a global priority, embedded in the UN Global Digital Compact and a UN Environment Assembly resolution. To turn vision into action, we focus on three pillars. First, research. In 2026, the coalition will launch AI research pitch sessions to connect university projects with funding and industry partners. Second, measurement. You can’t improve what you can’t measure. Today, I’m proud to announce, on behalf of the coalition, ITU, the Institute of Electrical and Electronics Engineers and ISO, that we have published the second version of the global approach on standardization for AI environmental sustainability, to promote consistency in AI environmental sustainability standardization. And third, action. France is implementing policies for low-carbon, efficient AI: powered by renewable energy, hosted in green data centers, and designed to be leaner and smarter. This approach boosts competitiveness and discovery with minimal environmental costs. That’s why, as an AI Impact Summit outcome, India, France and UNESCO launched the Resilient AI Challenge, a global challenge to advance compressed, more energy-efficient AI models.

This initiative supports innovation aligned with our shared goals. Sustainable and resilient AI must be the global baseline, the only path to equitable development that serves people and the planet. France and India have led this effort from Paris to New Delhi by focusing on people, planet and progress. Now we must deliver together. I look forward to our panelists’ insights. Thank you.

Speaker 1

Thank you. Many thanks, Madam Minister, for this insightful introduction and the pioneering role of France in sustainable AI. I now have the pleasure to welcome Dr. Tafik Delassie, Assistant Director General for the Communication and Technology Sector at UNESCO, whose landmark report on smaller models was published in July last year. Thank you.

Dr. Tafik Delassie

Madam Minister for AI and Digital Affairs, Madam Special Envoy for AI, distinguished participants, esteemed colleagues, dear partners, ladies and gentlemen. I’m very pleased, on behalf of UNESCO, to be with you this afternoon for this important session. But allow me first to raise a question. What if the next breakthrough in AI is not about building ever larger models, but about building leaner, more resilient systems, systems that can solve real-world problems under real-world constraints, including in low-resource environments? Before turning to the Resilient AI Challenge, I would like to warmly thank the government of India for its leadership in convening this timely, strategic, and forward-looking summit.

I also would like to acknowledge the co-chairs of the Working Group on Resilience, Innovation, and Efficiency, the Ministry of Power of India, and the Ministry of Ecological Transition of France, for their strong commitment, engagement, and stewardship. My sincere thanks also go to our technical and ecosystem partners, including Mistral, Google, Hugging Face, AI Kosh, Sarvam AI, and the broader Sustainable AI Coalition, alongside many academic experts who have contributed to this collective effort. UNESCO is proud to serve as a key knowledge partner for this initiative and to support the vision of India regarding AI that truly serves the people, the planet and prosperity. I would like to convey briefly three messages. First, the future of AI will not be defined by scale alone, but rather by resilience.

Second, resource-efficient AI is not a trade-off. It is a path to inclusion and access. Thirdly, delivering impact at scale requires global collaboration that is truly grounded in real-world validation. We are at a critical inflection point. Generative AI tools are now used by more than 1 billion people on a daily basis. Yet behind every prompt lies a growing energy and resource footprint. Inference already amounts to hundreds of gigawatt-hours per year, comparable to the annual electricity use of millions of people in low-income countries. Training frontier models is even more energy intensive. A single large AI model can consume over 1,000 megawatt-hours of electricity, enough to power villages across India for a whole year, placing increasing pressure on energy systems and reinforcing inequalities in access to compute and infrastructure.

These challenges are not theoretical. They are real. They directly affect whether AI can be deployed in public services and by small and medium-sized enterprises, in rural health systems and low-connectivity environments, both in developing countries and in advanced economies facing growing energy constraints.

This is why the next breakthrough in AI will not come from building ever-larger models. It will come from building smarter, leaner, and more resilient systems that can deliver impact under energy constraints rather than exacerbate them. A proverb says, a good life is for everyone. It captures the spirit of living well together, in community, inclusively, and in harmony with our planet. In the same spirit, AI must be designed not only for those with the greatest computing power, but for all communities, everywhere around the world. The work of UNESCO shows that small but conscious design choices, such as model compression, task-specific architectures, and optimized inference, can reduce AI energy consumption by up to 90% without compromising performance.
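One of the design choices mentioned here, weight quantization, can be reasoned about with simple arithmetic: a model’s weight-storage footprint, and with it much of its inference energy, scales with bits per parameter. The model size below is an illustrative assumption, not a reference to any specific system.

```python
def model_size_gb(n_params, bits_per_param):
    """Approximate weight-storage footprint of a model in gigabytes."""
    return n_params * bits_per_param / 8 / 1e9

n = 7e9  # a hypothetical 7-billion-parameter model
for bits in (32, 16, 8, 4):
    print(f"{bits:>2}-bit: {model_size_gb(n, bits):5.1f} GB")
# Going from 32-bit to 4-bit weights is an 8x reduction in memory
# traffic, one of the levers behind large inference-energy savings.
```

Smaller weights also mean the model can fit on a single accelerator or an edge device, which is precisely what makes deployment viable in the low-resource environments described above.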

Resilient AI is therefore not only greener, it is more inclusive, more affordable, and more adaptable. It lowers barriers for researchers, empowers local ecosystems, and enables AI solutions to reach communities too often left at the margins of the digital transformation. This brings me to why we are here today. It is my pleasure to officially announce the launch of the Resilient AI Challenge, a flagship initiative under the India AI Impact Summit Working Group on Resilience, Innovation, and Efficiency. This challenge moves us decisively from principles to action. It brings together model providers, researchers, startups, and academic teams to demonstrate how open-source AI models can be optimized, compressed, and deployed to achieve strong performance while significantly reducing the use of energy.

Rather than comparing entirely different models, the challenge focuses on improving one base model per task, ensuring transparency, fairness, and rigorous benchmarking. Submissions will be evaluated on shared infrastructure and ranked on both accuracy and energy efficiency, generating clear and actionable evidence. The winners of the challenge will be announced at the AI for Good Summit this coming July in Geneva, but the real success will be, of course, much broader than that.

Speaker 1

Thank you. Before we delve into the panel, I will invite the keynote speakers and the panelists to come up front for a picture, now that we have the final line-up, and then we will start the panel. Thank you very much. So now let me welcome our distinguished panelists and Mrs. Anne Bouvreau, Special Envoy on AI for France, moderator of this panel, to discuss how to make these models work and deploy in real life to the benefit of all. Thank you so much.

Anne Bouvreau

Thank you very much, Hélène, and thanks to the two keynote speakers we just heard. Without further ado, I think what we want is to head into the discussion, so I will not make long introductions. I’m delighted to welcome our distinguished guests: James Manyika, Senior Vice President, Google-Alphabet; Arthur Mensch, CEO of Mistral AI; Abhishek Singh, lead organizer of this summit. A round of applause for him, please. Thank you. And Ambassador Philip Tigo, Ambassador and Tech Envoy for Kenya. Thank you. So the AI industry, according to the International Energy Agency, will probably consume 3% of worldwide electricity production by 2030. This is not the end of the world, but this is a huge expansion.

And therefore, there are environmental costs and impacts that we need to mitigate. AI, of course, at the same time also creates opportunity to optimize resources, including energy. So how can we ensure that AI’s development, in particular in developing countries but everywhere as well, is something that comes together with a focus on the planet? I’ll start with a question for Ambassador Philip Tigo. Let me turn to you first. You’re an active proponent of a more efficient and sustainable AI.

Africa is one of the most energy-constrained regions. It’s also a continent where adoption is becoming very frequent. We saw that with mobile phone payment. We saw that with other technologies. How is Kenya approaching efficient AI? What can you share with us?

Ambassador Philip Tigo

Thank you so much. And I’ll be very quick because I can see the ticker. There are a couple of things. One is that we’re very lucky as a country that our energy mix is already 95 percent renewable. And we keep on investing into that. So we have geothermal, we have wind, we have water, we have solar, and we have hydro. So that’s the first kind of framework that we have, that it really must be green by design. The second part, of course, is that where the green comes in, it’s not necessarily only on the efficient data centers or how energy efficient they are, but also on the use of it. So part of our green by design is also a wide-scale education around how people use these resources.

For example, you shouldn’t be looking for the next Starbucks, for example, when you’re using AI. You should really be using Google as an option. So people need to have those choices in their heads by design. The third part, of course, is that protecting Kenya alone is not enough. You can put a green shield around the country, but AI is global. So the third part, quickly, is working in the international framework. As you know, we worked with the Coalition for Sustainable AI to champion the first-ever AI environmental sustainability resolution, and it had four parts, right? The energy, the life cycle, the sustainability piece, but also improving the state of the science to continue to understand the energy efficiency component of AI.

Anne Bouvreau

Excellent. Thank you so much. And we’ll try to keep this lively. My next question will be for James, for James Manika. Google is one of the key players, of course Mistral as well and Hugging Face, but you’re a key player in publishing transparent data on environmental impact of AI. And you develop both very large frontier models and also smaller, very efficient models. It’s the Gemini and the Gemma. Thank you. So I’ll start with the Gemini family. From a business and an engineering standpoint, I think it’s a very interesting family. Where is the real frontier? Is it scaling up or scaling down?

James Manyika

Well, thank you. Pleasure to be here at the summit with you, Anne. I think just to get to the question, we’re actually looking at this on multiple fronts. On the one hand, if you look at, for example, our Gemini models, it’s not one model. We have a whole model family, which starts with the Gemini Pro, goes to the Gemini Flash models, which are some of the most efficient models. So we’re trying to make sure with our models, our Gemini family, we cover the performance efficiency frontier of these models. You may have noticed that recently no one really talks a lot about model size. Remember, two, three years ago… It used to be the big craze.

It used to be the big question: how many parameters. And that’s because even with our Pro models, we’re now pursuing these mixture of experts architectures, where a given query doesn’t activate the entire model. No one activates dense models anymore; people are activating only a fraction of a mixture of experts. So on the Gemini models, we’re trying to cover the performance and efficiency frontier. Then we also have our Gemma models. Our Gemma models are our most efficient open source, open weights models. In fact, here in India, on AI Kosh, which is the platform in India, we actually have 23 Gemma models on there. And that’s because we’ve optimized them for different sizes.

Some of them are efficient and run on a single GPU, because we know that on the edge, people want a variety of model choices to make sure we drive efficiency. I’ll say two more quick things. Every year we focus on efficiency, because from an energy point of view, from a compute efficiency point of view, even from a business standpoint, it’s the right thing to do. Because as you start to serve many more people, you want the most efficient systems. I’ll say one last thing finally, which is that we are making probably the most investments of anybody into using green, clean energy for our compute.

In fact, we’ve made this audacious goal that at some point in the 2030 to 2035 era, we want to be 24-7 carbon free. So we’ve made investments in nuclear, in geothermal; we actually have several operational data centers running on geothermal. We’re using hydro, we’re using wind and solar. So we’re trying to get to a point where all our energy use for compute is carbon free. That’s kind of our moonshot goal.

Anne Bouvreau

Excellent, thank you so much. I’d like to move to Arthur Mensch. Mistral is developing very large models, but is really also very good at high-performance compact models. And I know your engineers, and you as a co-founder and CEO, also strongly believe in addressing the environmental impact of AI and what can be done there. So what can you share with us on that, with both your business and engineering experience: where does model efficiency have the highest return?

Arthur Mensch

So I would start with a couple of technical aspects. To James’ point, model size is indeed not the only thing we should be looking at. Effectively, we are using sparse mixtures of experts because those are models which have a lot of parameters to store knowledge, but where you only activate 5% of them. That has been a key way of reducing the number of FLOPs you do to generate one token, which is the one thing that matters for energy and therefore for carbon intensity. It’s one of the multipliers, actually. So the sparsity matters. Then the other thing that matters is the systems on top: the caching systems that you can put in place, the way you’re managing the context so that you’re not reprocessing information. And beyond just releasing the model weights, which is something we’ve always done, we’re also heavy contributors to inference frameworks that are using more and more advanced technology to handle the caching systems, in a way that removes the wasteful computations we used to do. So it’s an algorithmic problem, which is actually very interesting. It’s also a machine learning problem, because depending on the request you’re getting, you can actually route the request to a small model or to a large model. So to James’ point, it’s actually very important for any company doing models to have small models all the way to large models, in particular because the large ones can be used to make specialized models after that. That’s an important point. But I would say if you look at the carbon footprint of artificial intelligence today, because most of the GPUs are currently being used for training, most of the weight comes from the fact that you have around 10 labs in the world training models that in the end look very similar.
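The request-routing idea described here, sometimes called a model cascade, can be sketched simply: try the cheap model first, and escalate only when its confidence is low. The models, the confidence heuristic, and the threshold below are toy stand-ins, not Mistral’s actual serving logic.

```python
def cascade(prompt, small_model, large_model, threshold=0.8):
    """Answer with the small model when it is confident enough,
    falling back to the large model otherwise. Each model returns
    (answer, confidence); only escalated requests pay the large
    model's energy cost."""
    answer, confidence = small_model(prompt)
    if confidence >= threshold:
        return answer, "small"
    answer, _ = large_model(prompt)
    return answer, "large"

# Toy stand-ins: the small model is only "confident" on short prompts.
small = lambda p: (p.upper(), 0.9 if len(p) < 20 else 0.3)
large = lambda p: (p.upper(), 0.99)

print(cascade("short question", small, large))
print(cascade("a much longer and harder question", small, large))
```

If most traffic is easy, most requests never touch the large model, so the fleet’s average energy per request drops sharply even though peak capability is preserved.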

And so for us, if I look at our biggest leverage there, the fact that we’ve been open sourcing models that are very large, and we’ve been open sourcing our best models really, has been a major way of reducing the externality cost that you’re producing. Because we’re investing, and it costs a lot of carbon to actually train a model, but then we give it for free to everyone else. And what that means is that people can build on top, and that amortizes the cost. Suddenly you don’t have 10 companies training the same kind of models; this thing is out there and you don’t need to reinvest. So I think that’s the big part. So that’s really on the training front.

And today training is the thing that takes most of the cost. Now, when it comes to our own approach to sustainability, and I think I agree with James, one of the multipliers is the carbon intensity of your energy. And so there is a locality aspect to it. We’ve been building our data centers, and we’ve recently been training our models on our own hardware, which sits in France. France is heavily nuclear, so the carbon intensity is low. Also 95%, yes. Philip, sorry. And in Sweden, where it’s not 95%, still very good, still very good, you have hydro. So choosing the locality is important, because it’s one of the multipliers that you want to optimize for.

And finally, the one thing to worry about is, I mean, model size is one thing, carbon intensity is one thing, and then chips are also another thing. So being able to use the diversity of chips is huge. It’s super important. And we are working on using new kinds of chips that are much more efficient from an energy perspective. Now, to James’ point, I would like to add that the good thing about AI is that we are energy constrained, and so suddenly it means that efficiency is actually driven by business. I would say transparency is super important for us and matters for our customers, so we’ve done a very deep study on how that works and on the carbon intensity of our training; we’ve done it with Mistral Large 2, with third-party auditors, etc.

But the business is also driving it; it’s also a reason why we’re going toward more efficient models, because we don’t have enough energy; we need to have things that run on smaller hardware, and it depends on the countries as well. In the US the constraint is actually higher than in Europe, and I think it’s going to be very high in Africa and in India down the line. So it’s always good when business aligns. And I think it would be valuable for public procurement in particular to put more pressure on sustainability as a way to accelerate the industry, because that raises the stakes and so that also pushes us toward more efficiency.

Anne Bouvreau

Wonderful. Thank you so much. I think that was really… Do you want to react quickly, James? No, no. Before we go to Abhishek?

James Manyika

I was going to agree with Arthur, but I’ll maybe add a couple more components. One of the things that is also important in this conversation is what you actually apply AI to. There’s a whole range of applications of AI that are actually helpful for sustainability: grid management, helping with adaptation to the effects of climate change. And we’re seeing a lot of those kinds of applications at scale, in ways that make an enormous difference to the sustainability question.

Arthur Mensch

So adding to that, you have agriculture as well where you have a lot of leverage. You have material science and chemistry. So we work with vertical AI companies to try and make that happen.

Anne Bouvreau

Great to see this. Thank you. I think we have a very high-quality exchange in this panel. Abhishek, I’d like to move to you, and, yeah, the microphone as well. Arthur actually introduced the fact that energy constraints are real, and they’re real in India, of course, where you have such a high population and wide market, and also, of course, infrastructure constraints. How do you approach this? How does the AI mission in India approach this? And what are you doing on this front?

Abhishek Singh

the AI factories, with the hope that ultimately this investment will pay out. But when we ultimately look at how it will pay out, it will come out through inferencing. And when we are doing inferencing at scale, ultimately users will have to pay. So until and unless you have focused on efficiency and sustainability, the actual ROI on the investments will not work out. So it will be in the interest of everyone, and only those players will survive who actually ensure that per-token energy use is minimal. So it will require innovation at multiple levels: innovation in how you do the algorithms, how you do the inferencing, how you use it. And therein the value of small language models will come in.

While it’s fashionable to go for trillion-parameter models and more, ultimately, if you are building use cases in key sectors like healthcare or education or agriculture, you’ll need to go with smaller models, which will consume less energy and cost less. So sustainability is a given. So what we are doing, of course, in the India AI Mission and in India is, number one, we are not chasing the trillion parameter models. We are not in the parameter game, number one. Number two, we are not even right now at the stage at which our companies are. I don’t think any one of us is chasing AGI, which is glamorized by some of the frontier AI models.

We are trying to think of what solutions can be built using the current level of models that are available, which can solve societal problems in various sectors. To have real impact. Real impact. It’s a plug for you. Yeah, exactly. And when we do that, the cost per inference, the cost per query, becomes material, because many of the public sector applications, especially in sectors like agriculture, healthcare, and education, will for some time have to be funded by government, which means the taxpayers’ money. So we cannot be extravagant in doing that. So: ensuring that the PUEs of data centers are lower, ensuring grid efficiency. We are, in fact, doing a project with the Ministry of Power, which I think finds a mention in the resilient inter… committee’s report also, wherein we are using AI for improving grid efficiency, reducing transmission and distribution losses, and what we have found is that doing it smartly, using technology for it, brings down the T&D losses by almost 10 to 15%. That’s again a big, big gain.

So we’ll have to look at the entire ecosystem, right from what kind of chips you are using for what. If you are doing inferencing, do you need the high-end chip for that? Classifying it, having a very sector-specific, use-case-specific approach to designing your systems will ultimately be where the game is, and those who are able to do that will be able to build more sustainable systems. Their cost per query will be lower, and they will be able to survive.
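To make the two efficiency levers in this passage concrete, here is a rough back-of-envelope sketch of data-centre PUE and grid transmission and distribution (T&D) losses. The PUE values, loss percentages, and workload size are hypothetical, chosen only to illustrate the arithmetic, not figures from the project described.

```python
# Back-of-envelope sketch of the two efficiency levers mentioned above:
# data-centre PUE and grid transmission & distribution (T&D) losses.
# All numbers are hypothetical illustrations, not measured figures.

def facility_energy(it_energy_kwh: float, pue: float) -> float:
    """PUE = total facility energy / IT energy, so total = IT * PUE."""
    return it_energy_kwh * pue

def delivered_energy(generated_kwh: float, td_loss_pct: float) -> float:
    """Energy that survives transmission and distribution losses."""
    return generated_kwh * (1 - td_loss_pct / 100)

# A workload needing 1000 kWh of IT energy, at PUE 1.6 vs an efficient 1.2:
saving = facility_energy(1000, 1.6) - facility_energy(1000, 1.2)  # 400 kWh

# Cutting T&D losses from 20% to 17% (a 15% relative reduction):
extra = delivered_energy(1000, 17) - delivered_energy(1000, 20)   # 30 kWh
```

Small per-query differences like these compound at national scale, which is why a 10 to 15% relative reduction in T&D losses is described as a big gain.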

So we, as government, are trying to enable this, but ultimately I feel that business sense will ensure that sustainability comes in. It cannot be that we consume as much energy as we want, unmindful of the ramifications, just because we have the funds; the VCs will pay only until a particular point. It cannot be forever.

Anne Bouvreau

Excellent. Thank you. We’re unpacking a number of things: we’re unpacking training from inference and utilization; we’re unpacking large models from smaller models, and actually you ideally need the larger models, through open source, to be able to build the smaller ones. We’re looking at how AI can then loop back and help optimize further. We’ve heard a number of super interesting things. You started a little bit on this, Arthur, but let me ask this question of everyone quickly. First of all, we also heard that business and commercial interests are aligned with the desire to make AI more sustainable, which is a very hopeful message, but what can governments and institutions do to further help improve this?

Arthur, you hinted at public procurement. Do you want to say a few more words on this?

Arthur Mensch

Yes, it’s one of the ways in which we can build and make sure that efficiency is favored. Again, I think the market can solve it, but it can be accelerated, and the faster we can go, the better, because effectively we’re building a lot of electricity capacity at the moment for AI, and so if we can just make sure that efficiency is part of the requirements, that’s good. It’s worth noting that, for better or worse, being an AI company is turning into being a utility company, in that you’re basically turning electricity into tokens. It’s highly competitive, so the margins are getting, I would say, thinner, which means that things are also getting price sensitive, and when things get price sensitive, efficiency really matters.

So that’s going to be partially solved by the market, but it can be accelerated. And I’d say the way governments can also lead is probably by sustaining open-source projects that go beyond the models. The inference path, what we call agent harnessing, is also something that will eventually become a common good and can be used everywhere. And so: good practices, and incentivizing research as well, because the domains of routing, of picking the right models, of distillation, do not require you to have thousands of GPUs. And so you can do efficient research; public research in those domains is very much possible, and we’d love to see more of it.

So I guess that’s the three things that I can mention.

Anne Bouvreau

Wonderful. Thank you. James, do you want to add a few words on that?

James Manyika

Yeah, first of all, I agree with the three things that Arthur mentioned. I would add a couple more. One of the things that’s actually quite interesting is that the more governments can incentivize and encourage the use of off-grid solutions, the better, because that takes the burden off the public infrastructure that affects citizens. So, for example, we’re spending a lot of time thinking about off-grid solar, off-grid wind, and we’re thinking about geothermal. We’ve even invested in our own small modular reactors. And we’re also investing, to Arthur’s point, in breakthrough research. One of the most exciting areas, by the way, which is not as far away as people think it is, is actually fusion energy.

So we’ve made some of the biggest investments in fusion energy. And, by the way, AI is actually helping us make that progress, because one of the things you worry about with fusion energy is how to do what’s called plasma containment, where you actually hold these high-energy particles and contain them. And AI has actually helped us do that. So even the use of AI in breakthrough research like that is pretty important. I’ll say one other thing very quickly, because it reinforces, I think, something that Arthur and the minister said, which is: inference is going to turn out to be the most important thing in many respects, far more than the training part of this.

And we’ve actually started to invest in that. So, for example, we have our own chips, TPUs; we use both TPUs and GPUs. Among the TPUs, we’ve built some inference-specific ones, to be able to do inference even more efficiently than you typically would with a general-purpose GPU.

Anne Bouvreau

Wonderful. Thank you. Ambassador Philip Tigo, what can you… Maybe you can take the microphone from a neighbor, and then I’ll ask Abhishek to conclude.

Ambassador Philip Tigo

No, very quick, because a lot of the solutions are for developed economies. I think we have to be a little bit realistic in terms of where emerging economies are. One, there’s a bigger question of sovereignty, right? And there are conversations around that. And there have to be trade-offs. Every country wants to have the entire stack in their country, so I think governments need to be very realistic about which parts of the stack they really want to keep in their country, especially if you have this AI-for-green and green-AI conversation. The second part, especially in emerging economies, is to look at sustainability across the stack.

So we may not necessarily have compute, but we have other parts of the stack. So how do you ensure that part of the training gets done there? The third part, I think, is to expand the definition of safety, because AI safety is very much about the models and not necessarily about the use and the potential harms to the environment. I’ve not seen that research. So there could be an expansion of research around AI safety to include environmental concerns. The other quick one, of course, is that you can only know the environmental footprint from use cases, and it has to be specific. These are deep dives, and I have a sense people need to invest in deep dives.

When I look at food systems, that’s an entire system, so there are potential problems there if we do not have that. And to my last point, around standards: we really have to invest in the standards. We’ve seen that with other electronics, right? So we need to see that here, so that everybody knows the kind of environmental standards to apply, and that needs to be done at scale. Thank you so much.

Anne Bouvreau

Abhishek, what can governments do? You represent a government. You want the… It works?

Abhishek Singh

Governments are doing this. Every government is conscious of this. In India, in fact, we recently focused on the small modular reactors which James mentioned: we came out with a new policy under which the sector has been opened up for the private sector also to invest. What we do believe is that inferencing needs will go up, and in India, when we are talking inferencing, we are talking inferencing at scale: say 100 or 200 million users in the first phase, and ultimately 500 million and more, people will start using these services, and the kind of back-end infrastructure that we need will be huge, which will consume a lot of energy. So to reduce the load on the existing grid, we will need to think of off-grid solutions.

We will need to think of dedicated small modular reactors which can power the AI applications. The world over, what we are seeing is that as AI adoption goes up, energy costs go up. And if energy costs go up, ultimately for elected governments that does not bode well. So the entire strategy has to be thought through: how do we balance the need for more efficient and more intense AI solutions with the needs of sustainability, with the need to reduce the carbon footprint, because we are also only a few years away from the 2030 SDGs, the Sustainable Development Goals. So ultimately we need to balance both: the need for more efficient AI and the need to reduce the impact on the environment.

Otherwise we solve one problem and create another. So that’s again something the governments are concerned about, and I think augmenting the renewable energy sources, solar, wind and nuclear, and eventually fusion, will be the way forward.

Anne Bouvreau

Yeah, thank you very much. I think this has been a fascinating discussion. We heard from all of the panelists that the environmental impact of AI is not an afterthought; it’s actually front and center. It’s part of the competitive advantage, part of what companies and governments think about. This is a very strong and positive message that I think we can all be reassured by. Let me just close by mentioning the Resilient AI Challenge that was mentioned at the beginning. Registrations close on March 15th, so please submit your solution. Please join me in thanking this wonderful panel. Thank you, everyone, for joining us today, and we really hope to see you engage in this Resilient AI Challenge.

This is a first at the international level, working on improving research on compressed models. It is one of the… solutions and tools that were presented in the panel, so we really encourage you to register. Thank you so much to our panelists; another round of applause. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Anne Le Henanf
3 arguments · 90 words per minute · 490 words · 325 seconds
Argument 1
AI’s energy demands threaten to outpace green energy progress, creating an energy and environment crisis
EXPLANATION
Anne Le Henanf argues that as governments work to decarbonize, AI’s rapidly growing energy requirements pose a significant challenge that could undermine green energy progress. She frames this as both an energy and environmental imperative that must be addressed urgently.
EVIDENCE
Model providers face a stark reality where AI’s energy needs are growing faster than supply
MAJOR DISCUSSION POINT
Sustainable AI as a Global Imperative
Argument 2
Massive AI models without sustainability create new divides and exclude resource-lacking regions
EXPLANATION
The minister contends that large AI models developed without sustainability considerations exacerbate inequality by creating barriers for regions and communities that lack the necessary resources to access or deploy these technologies. This represents a fairness crisis in AI development.
EVIDENCE
Massive AI models without sustainability create new divides and can exclude regions and communities lacking resources
MAJOR DISCUSSION POINT
Sustainable AI as a Global Imperative
Argument 3
France launched the Sustainable AI Coalition with UNEP, ITU and India, growing from 90 to over 220 partners
EXPLANATION
Anne Le Henanf highlights France’s leadership in establishing the Sustainable AI Coalition as a concrete initiative to address sustainability challenges in AI. The coalition’s rapid growth demonstrates significant international interest and commitment to sustainable AI development.
EVIDENCE
From 90 initial partners, we have grown to over 220, including tech firms, startups, utilities, NGOs, and research institutions, backed by eight international organizations and 15 countries
MAJOR DISCUSSION POINT
Sustainable AI as a Global Imperative
Dr. Tafik Delassie
4 arguments · 153 words per minute · 985 words · 385 seconds
Argument 1
The future of AI will be defined by resilience rather than scale alone
EXPLANATION
Dr. Delassie argues that the next breakthrough in AI will not come from building ever-larger models, but from developing smarter, leaner, and more resilient systems. He suggests that the focus should shift from scale to efficiency and adaptability under real-world constraints.
EVIDENCE
What if the next breakthrough in AI is not about building other larger models, but about building leaner, more resilient systems that can solve real world problems and real world constraints, including in low resource environments
MAJOR DISCUSSION POINT
Sustainable AI as a Global Imperative
Argument 2
Small but conscious design choices can reduce AI energy consumption by up to 90% without compromising performance
EXPLANATION
UNESCO’s research demonstrates that specific technical approaches like model compression, task-specific architectures, and optimized inference can dramatically reduce energy consumption while maintaining effectiveness. This shows that sustainability and performance are not mutually exclusive.
EVIDENCE
The work of UNESCO shows that small but conscious design choices, such as model compression, task-specific architectures, and optimized inference can reduce AI energy consumption by up to 90% without compromising performance
MAJOR DISCUSSION POINT
Energy Efficiency and Model Optimization
AGREED WITH
Arthur Mensch, James Manyika
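The 90% figure above can be made tangible with a toy sketch of compression arithmetic, using memory footprint as a rough proxy for energy. The parameter count, pruning ratio, and quantization scheme below are hypothetical illustrations, not UNESCO's actual methodology.

```python
# Hypothetical illustration of compression arithmetic: pruning plus
# int8 quantization shrinking a model's memory footprint by ~90%.
BYTES_FP32 = 4  # bytes per fp32 weight
BYTES_INT8 = 1  # bytes per int8 weight

def baseline_bytes(n_params: int) -> int:
    """Uncompressed fp32 footprint."""
    return n_params * BYTES_FP32

def compressed_bytes(n_params: int, keep_fraction: float) -> float:
    """Keep only keep_fraction of weights (pruning), store them in int8."""
    return n_params * keep_fraction * BYTES_INT8

n = 1_000_000_000                  # a 1B-parameter model
before = baseline_bytes(n)         # 4 GB in fp32
after = compressed_bytes(n, 0.4)   # prune 60%, quantize the rest to int8
reduction = 1 - after / before     # 0.9, i.e. a 90% smaller footprint
```

The point of the sketch is that individually modest choices (a 4x smaller number format, a 2.5x pruning ratio) multiply together, which is how aggregate savings on the order of 90% become reachable.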
Argument 3
The Resilient AI Challenge focuses on optimizing open-source models for energy efficiency while maintaining performance
EXPLANATION
Dr. Delassie announces a flagship initiative that brings together various stakeholders to demonstrate how open-source AI models can be optimized and compressed for better energy efficiency. The challenge emphasizes transparency, fairness, and rigorous benchmarking rather than comparing entirely different models.
EVIDENCE
Rather than comparing entirely different models, the challenge focuses on improving one base model per task, ensuring transparency, fairness, and rigorous benchmarking. Submissions will be evaluated on shared infrastructure and ranked on both accuracy and energy efficiency
MAJOR DISCUSSION POINT
Open Source and Collaboration
AGREED WITH
Arthur Mensch
Argument 4
Resource-efficient AI enables deployment in public services, rural health systems, and low-connectivity environments
EXPLANATION
Dr. Delassie emphasizes that energy-efficient AI is not just about environmental benefits but also about accessibility and inclusion. By reducing resource requirements, AI can be deployed in underserved areas and developing countries that face infrastructure constraints.
EVIDENCE
These challenges directly affect whether AI can be deployed in public services, also by small, medium-sized enterprises, in rural health systems and low connectivity environments, both in developing countries but also in advanced economies facing growing energy constraints
MAJOR DISCUSSION POINT
Real-World Applications and Impact
Ambassador Philip Tigo
5 arguments · 206 words per minute · 583 words · 169 seconds
Argument 1
Kenya’s energy mix is already 95% renewable, providing a green foundation for AI development
EXPLANATION
Ambassador Tigo explains that Kenya has a significant advantage in sustainable AI development because the country’s energy infrastructure is already predominantly renewable. This includes diverse sources like geothermal, wind, water, solar, and hydro power, creating a green by design framework.
EVIDENCE
We’re very lucky as a country that our energy mix is already 95 percent renewable. So we have geothermal, we have wind, we have water, we have solar, and we have hydro
MAJOR DISCUSSION POINT
Sustainable AI as a Global Imperative
Argument 2
Education about efficient AI usage is crucial – users should make informed choices about when to use AI versus traditional search
EXPLANATION
The ambassador emphasizes that sustainability in AI is not just about technical efficiency but also about user behavior and education. He argues that people need to understand when it’s appropriate to use AI versus more traditional and less energy-intensive alternatives like regular search engines.
EVIDENCE
For example, you shouldn’t be looking for the next Starbucks, for example, when you’re using AI. You should really be using Google as an option. So people need to have those choices in their heads by design
MAJOR DISCUSSION POINT
Real-World Applications and Impact
Argument 3
Emerging economies need realistic approaches to AI sovereignty and should focus on specific parts of the technology stack
EXPLANATION
Ambassador Tigo argues that developing countries cannot realistically maintain the entire AI technology stack domestically and need to make strategic choices about which components to prioritize. This requires balancing sovereignty concerns with practical sustainability considerations.
EVIDENCE
Every country wants to have the entire stack in their country. So I think governments need to be very realistic around which parts of the stack they really want to keep in their country, especially if you have this AI for green and green AI conversation
MAJOR DISCUSSION POINT
Government Policy and Procurement
DISAGREED WITH
Abhishek Singh
Argument 4
AI safety definitions should expand to include environmental concerns beyond just model safety
EXPLANATION
The ambassador calls for broadening the concept of AI safety to encompass environmental impacts, not just the traditional focus on model behavior and potential harms. He notes that current AI safety research doesn’t adequately address environmental concerns.
EVIDENCE
AI safety is very much around the models and not necessarily around the use and potential harms of the environment. I’ve not seen that research. So there could be an expansion of research around looking at AI safety, including environmental concerns
MAJOR DISCUSSION POINT
Government Policy and Procurement
Argument 5
Environmental standards need to be developed and implemented at scale across the AI industry
EXPLANATION
Ambassador Tigo emphasizes the need for industry-wide environmental standards for AI systems, similar to what exists for other electronic devices. He argues that standardization is essential for ensuring consistent environmental practices across the AI ecosystem.
EVIDENCE
We’ve seen that in other electronics, right? So we need to see that here, so that everybody knows the kind of environmental standards to apply, and that needs to be done at scale
MAJOR DISCUSSION POINT
Government Policy and Procurement
AGREED WITH
Arthur Mensch, Abhishek Singh
Arthur Mensch
6 arguments · 183 words per minute · 1190 words · 389 seconds
Argument 1
Sparse mixture of experts models activate only 5% of parameters, reducing computational requirements
EXPLANATION
Arthur Mensch explains that modern AI models use sparse mixture of experts architectures where only a small fraction of the model’s parameters are activated for any given task. This technical approach significantly reduces the computational work (flops) needed to generate each token, directly impacting energy consumption.
EVIDENCE
We are using sparse mixture of experts because those are models which have a lot of parameters to store knowledge, but where you only activate 5% of them. So that has been a key way of reducing the number of flops you do to generate one token
MAJOR DISCUSSION POINT
Energy Efficiency and Model Optimization
AGREED WITH
Dr. Tafik Delassie, James Manyika
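The arithmetic behind that 5% figure can be sketched as follows. The expert count, top-k routing, and parameter split are illustrative assumptions, not Mistral's actual architecture; they are chosen so the active fraction comes out near the quoted 5%.

```python
# Sketch of sparse mixture-of-experts accounting: per-token compute
# tracks the parameters actually activated, not the total stored.
# All configuration numbers below are hypothetical.

def active_params(total: int, n_experts: int, top_k: int,
                  expert_share: float) -> float:
    """Parameters touched per token: the always-on shared layers
    (attention, embeddings, ...) plus the top_k experts the router picks."""
    shared = total * (1 - expert_share)
    per_expert = total * expert_share / n_experts
    return shared + top_k * per_expert

total = 100_000_000_000  # 100B parameters stored
active = active_params(total, n_experts=64, top_k=2, expert_share=0.98)
fraction = active / total  # roughly 0.05: ~5% of weights used per token
```

Because per-token FLOPs scale with the active parameters, such a model stores a large amount of knowledge while doing only a small fraction of a dense model's work per generated token.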
Argument 2
Open sourcing large models reduces carbon externalities by preventing multiple companies from training similar models
EXPLANATION
Mensch argues that by open sourcing their best models, Mistral helps reduce the overall carbon footprint of the AI industry. Instead of multiple companies independently training similar large models (which is carbon-intensive), they can build upon Mistral’s open-source models, amortizing the training costs across the ecosystem.
EVIDENCE
We’re investing and it costs a lot of carbon to actually train a model, but then we give it for free to everyone else. And what that means is that people can build on top. And that’s amortized costs. Suddenly you don’t have 10 companies doing and training the same kind of models
MAJOR DISCUSSION POINT
Open Source and Collaboration
AGREED WITH
Dr. Tafik Delassie
Argument 3
Training location matters – France uses nuclear energy and Sweden uses hydro for lower carbon intensity
EXPLANATION
Mensch emphasizes that the geographic location of AI model training significantly impacts carbon footprint due to different energy sources. Mistral strategically chooses locations with low-carbon electricity sources like nuclear power in France and hydroelectric power in Sweden.
EVIDENCE
We’ve been training our models recently on our own hardware, which sits in France, which France is heavily nuclear, so the carbon intensity is low. And in Sweden where you have hydro
MAJOR DISCUSSION POINT
Infrastructure and Energy Solutions
Argument 4
Business interests align with sustainability as efficiency becomes crucial for competitive advantage
EXPLANATION
Mensch argues that AI companies are naturally incentivized to pursue efficiency because they are energy-constrained and operate in highly competitive markets with thin margins. This market dynamic means that sustainability and business success are aligned, making efficiency a competitive necessity.
EVIDENCE
Being an AI company is turning into being a utility company, in that you’re basically turning electricity into tokens. It’s highly competitive, so that means the margins are getting thinner, and which means that things are also getting price sensitive
MAJOR DISCUSSION POINT
Energy Efficiency and Model Optimization
AGREED WITH
Abhishek Singh, James Manyika
Argument 5
Governments should include sustainability criteria in public procurement to accelerate industry efficiency
EXPLANATION
Mensch suggests that government procurement policies can drive faster adoption of sustainable AI practices by including environmental criteria in their purchasing decisions. This would create market incentives for companies to prioritize efficiency and sustainability.
EVIDENCE
I think it would be valuable for public procurement in particular to put more pressure on sustainability as a way to accelerate the industry because that raises the stake and so that also pushes us toward more efficiency
MAJOR DISCUSSION POINT
Government Policy and Procurement
AGREED WITH
Ambassador Philip Tigo, Abhishek Singh
Argument 6
Public research should focus on model routing and distillation techniques that don’t require massive GPU resources
EXPLANATION
Mensch advocates for public investment in research areas like model routing and distillation that can improve AI efficiency without requiring enormous computational resources. These research domains are accessible to academic institutions and can contribute to the common good.
EVIDENCE
The domain of routing, picking the right models, the domain of distillation, those models do not require you to have thousands of GPUs. And so you can do efficient research, so public research on that domain is very much possible
MAJOR DISCUSSION POINT
Open Source and Collaboration
AGREED WITH
James Manyika, Abhishek Singh
Abhishek Singh
4 arguments · 193 words per minute · 907 words · 281 seconds
Argument 1
India focuses on smaller models for specific use cases rather than chasing trillion-parameter models
EXPLANATION
Singh explains that India’s AI strategy deliberately avoids the race for ever-larger models and instead focuses on practical applications that can solve societal problems in sectors like healthcare, education, and agriculture. This approach prioritizes real-world impact over technical benchmarks.
EVIDENCE
We are not chasing the trillion parameter models. We are not in the parameter game. We are trying to think of what are the solutions which can be built by using current level of models which are available, which can solve societal problems in various sectors
MAJOR DISCUSSION POINT
Energy Efficiency and Model Optimization
DISAGREED WITH
Ambassador Philip Tigo
Argument 2
Cost per inference becomes material for public sector applications funded by taxpayer money
EXPLANATION
Singh argues that for government-funded AI applications, especially in sectors like agriculture, healthcare, and education, the cost per query becomes a critical factor since it’s funded by taxpayers. This economic reality drives the need for efficient, sustainable AI systems.
EVIDENCE
Many of the public sector applications, especially in sectors like agriculture or healthcare, education, for some time will have to be funded by government, which will mean the taxpayers’ money. So we cannot be extravagant in doing that
MAJOR DISCUSSION POINT
Real-World Applications and Impact
AGREED WITH
Arthur Mensch, James Manyika
Argument 3
AI can help optimize grid efficiency, reducing transmission and distribution losses by 10-15%
EXPLANATION
Singh describes a concrete example of AI being used to improve energy infrastructure itself. India is working with the Ministry of Power on a project that uses AI to reduce transmission and distribution losses in the electrical grid, demonstrating how AI can contribute to overall energy efficiency.
EVIDENCE
We are doing a project with the Ministry of Power, wherein we are using AI for improving grid efficiency, reducing the transmission distribution losses and what we have felt is that doing it smartly and using technology for doing that brings down the T&D losses by almost 10 to 15%
MAJOR DISCUSSION POINT
Infrastructure and Energy Solutions
AGREED WITH
James Manyika, Arthur Mensch
Argument 4
India is developing off-grid solutions and small modular reactors to reduce load on existing grid
EXPLANATION
Singh explains that as AI inference scales up to serve hundreds of millions of users in India, the energy requirements will be enormous. To address this, India is exploring off-grid solutions and small modular reactors specifically for AI applications, reducing pressure on the existing electrical grid.
EVIDENCE
Recently we focused on the small modular reactors which James mentioned: we came out with a new policy under which the sector has been opened up for the private sector also to invest. To reduce the load on the existing grid, we will need to think of off-grid solutions
MAJOR DISCUSSION POINT
Infrastructure and Energy Solutions
AGREED WITH
Arthur Mensch, Ambassador Philip Tigo
James Manyika
3 arguments · 176 words per minute · 827 words · 280 seconds
Argument 1
Google covers the performance-efficiency frontier with Gemini family models and uses mixture of experts architectures
EXPLANATION
Manyika explains that Google’s approach involves creating a family of models with different efficiency profiles rather than focusing solely on large models. The Gemini family includes both high-performance models and highly efficient Flash models, using mixture of experts architectures where only parts of the model are activated for each task.
EVIDENCE
We have a whole model family, which starts with the Gemini Pro, goes to the Gemini Flash models, which are some of the most efficient models. We’re pursuing this mixture of experts’ architectures, where the activation of the model doesn’t activate the entire model
MAJOR DISCUSSION POINT
Energy Efficiency and Model Optimization
AGREED WITH
Dr. Tafik Delassie, Arthur Mensch
Argument 2
Google invests in nuclear, geothermal, hydro, wind and solar energy with a goal of 24-7 carbon-free operations
EXPLANATION
Manyika describes Google’s comprehensive approach to clean energy, including investments across multiple renewable sources and even nuclear power. The company has set an ambitious goal of achieving 24-7 carbon-free energy for all its operations, including AI compute infrastructure.
EVIDENCE
We’ve made this audacious goal that some point in the 2020, 2030, 2035 era, we want to be 24-7 carbon free. So we’ve made investments in nuclear, in geothermal, we actually have several operational data centers in geothermal. We’re using hydro, we’re using wind and solar
MAJOR DISCUSSION POINT
Infrastructure and Energy Solutions
Argument 3
AI applications in grid management, climate adaptation, agriculture, and material science provide sustainability benefits
EXPLANATION
Manyika argues that while AI consumes energy, it also enables significant sustainability benefits through its applications. He highlights how AI is being used for grid management, climate change adaptation, and other applications that can have positive environmental impacts at scale.
EVIDENCE
There’s a whole range of applications of AI that actually are helpful for sustainability, grid management, managing with the adaptation and effects of climate change. And we’re seeing a lot of those kinds of applications at scale in ways that make an enormous difference to the sustainability question
MAJOR DISCUSSION POINT
Real-World Applications and Impact
AGREED WITH
Arthur Mensch, Abhishek Singh
Anne Bouvreau
3 arguments · 78 words per minute · 971 words · 738 seconds
Argument 1
The AI industry will consume 3% of worldwide electricity production by 2030, representing a huge expansion with environmental costs that need mitigation
EXPLANATION
Anne Bouvreau highlights the significant growth in AI’s energy consumption, noting that while 3% may not seem catastrophic, it represents a massive increase that requires attention to environmental impacts. She emphasizes the need to balance AI development with planetary considerations.
EVIDENCE
The AI industry, according to the International Energy Agency, will probably consume 3% of worldwide electricity production by 2030
MAJOR DISCUSSION POINT
Sustainable AI as a Global Imperative
Argument 2
AI creates opportunities to optimize resources and energy while also having environmental costs
EXPLANATION
Bouvreau presents a balanced view that AI development brings both challenges and solutions to environmental issues. She suggests that AI can be part of the solution for resource optimization, including energy efficiency, even as it consumes resources itself.
EVIDENCE
AI, of course, at the same time also creates opportunity to optimize resources, including energy
MAJOR DISCUSSION POINT
Real-World Applications and Impact
Argument 3
The Resilient AI Challenge represents a first-of-its-kind international initiative focused on improving research on compressed models
EXPLANATION
Bouvreau promotes the Resilient AI Challenge as a groundbreaking international effort specifically targeting the development of more efficient AI models through compression techniques. She encourages participation by highlighting its unique position as the first international challenge of this type.
EVIDENCE
This is first at the international level working on improving research on compressed models. Registrations close on March 15th
MAJOR DISCUSSION POINT
Open Source and Collaboration
Speaker 1
2 arguments · 67 words per minute · 190 words · 168 seconds
Argument 1
The event explores sustainable AI through collaboration between France, UNESCO, and the Sustainable AI Coalition
EXPLANATION
Speaker 1 introduces the event as an exploration of sustainable AI issues, emphasizing the collaborative nature involving multiple international organizations. The speaker frames this as part of ongoing work in preparation for the AI Impact Summit.
EVIDENCE
This event is a continuation of the work co-chaired by India and France in preparation of this AI Impact Summit, co-organized by France, UNESCO, and the Sustainable AI Coalition
MAJOR DISCUSSION POINT
Sustainable AI as a Global Imperative
Argument 2
UNESCO’s landmark report on smaller models represents pioneering research in sustainable AI
EXPLANATION
Speaker 1 highlights UNESCO’s contribution to the field through their research on smaller AI models, positioning this work as groundbreaking and influential in the sustainable AI movement. The timing of the report’s publication in July suggests recent and relevant research.
EVIDENCE
UNESCO, whose landmark report on smaller models was published in July last year
MAJOR DISCUSSION POINT
Energy Efficiency and Model Optimization
Agreements
Agreement Points
Business interests naturally align with sustainability goals in AI development
Speakers: Arthur Mensch, Abhishek Singh, James Manyika
Business interests align with sustainability as efficiency becomes crucial for competitive advantage
Cost per inference becomes material for public sector applications funded by taxpayer money
Google covers the performance-efficiency frontier with Gemini family models and uses mixture of experts architectures
All three speakers agree that economic incentives drive companies and governments toward more efficient AI systems, making sustainability a business necessity rather than just an environmental concern
Technical approaches can dramatically reduce AI energy consumption without sacrificing performance
Speakers: Dr. Tafik Delassie, Arthur Mensch, James Manyika
Small but conscious design choices can reduce AI energy consumption by up to 90% without compromising performance
Sparse mixture of experts models activate only 5% of parameters, reducing computational requirements
Google covers the performance-efficiency frontier with Gemini family models and uses mixture of experts architectures
There is strong consensus that specific technical solutions like model compression, mixture of experts architectures, and optimized inference can achieve significant energy savings while maintaining AI performance
Open source collaboration is essential for sustainable AI development
Speakers: Dr. Tafik Delassie, Arthur Mensch
The Resilient AI Challenge focuses on optimizing open-source models for energy efficiency while maintaining performance
Open sourcing large models reduces carbon externalities by preventing multiple companies from training similar models
Both speakers emphasize that open source approaches prevent duplication of effort and carbon-intensive training while enabling broader access to efficient AI models
AI can contribute to environmental solutions while consuming energy
Speakers: James Manyika, Arthur Mensch, Abhishek Singh
AI applications in grid management, climate adaptation, agriculture, and material science provide sustainability benefits
Public research should focus on model routing and distillation techniques that don’t require massive GPU resources
AI can help optimize grid efficiency, reducing transmission and distribution losses by 10-15%
All speakers agree that AI’s environmental impact should be viewed holistically, considering both its energy consumption and its potential to optimize other systems and solve environmental challenges
Governments should actively promote sustainable AI through policy and procurement
Speakers: Arthur Mensch, Ambassador Philip Tigo, Abhishek Singh
Governments should include sustainability criteria in public procurement to accelerate industry efficiency
Environmental standards need to be developed and implemented at scale across the AI industry
India is developing off-grid solutions and small modular reactors to reduce load on existing grid
There is consensus that government intervention through procurement policies, standards, and infrastructure investment can accelerate the adoption of sustainable AI practices
Similar Viewpoints
Both speakers emphasize that the current trajectory of AI development focused on scale is unsustainable and needs to shift toward resilience and efficiency
Speakers: Anne Le Henanf, Dr. Tafik Delassie
AI’s energy demands threaten to outpace green energy progress, creating an energy and environment crisis
The future of AI will be defined by resilience rather than scale alone
All three speakers agree that sustainable AI is fundamentally about inclusion and ensuring that AI benefits reach underserved communities and developing countries
Speakers: Anne Le Henanf, Dr. Tafik Delassie, Ambassador Philip Tigo
Massive AI models without sustainability create new divides and exclude resource-lacking regions
Resource-efficient AI enables deployment in public services, rural health systems, and low-connectivity environments
Emerging economies need realistic approaches to AI sovereignty and should focus on specific parts of the technology stack
All three speakers emphasize the critical importance of clean energy sources for AI infrastructure, including nuclear, renewable, and off-grid solutions
Speakers: James Manyika, Arthur Mensch, Abhishek Singh
Google invests in nuclear, geothermal, hydro, wind and solar energy with a goal of 24-7 carbon-free operations
Training location matters – France uses nuclear energy and Sweden uses hydro for lower carbon intensity
India is developing off-grid solutions and small modular reactors to reduce load on existing grid
Unexpected Consensus
Nuclear energy as a solution for AI’s energy needs
Speakers: James Manyika, Abhishek Singh
Google invests in nuclear, geothermal, hydro, wind and solar energy with a goal of 24-7 carbon-free operations
India is developing off-grid solutions and small modular reactors to reduce load on existing grid
It’s notable that both a major tech company executive and a government official from different countries independently highlighted nuclear energy, including small modular reactors, as a key solution for AI’s energy demands. This suggests nuclear is gaining acceptance as a clean energy solution for AI infrastructure
AI safety should include environmental considerations
Speaker: Ambassador Philip Tigo
AI safety definitions should expand to include environmental concerns beyond just model safety
This represents an unexpected broadening of the AI safety discourse beyond traditional concerns about model behavior to include environmental impacts, suggesting a more holistic view of AI risks is emerging
User education is as important as technical efficiency
Speaker: Ambassador Philip Tigo
Education about efficient AI usage is crucial – users should make informed choices about when to use AI versus traditional search
While most discussion focused on technical and policy solutions, the emphasis on user education and behavioral change as a sustainability strategy was unexpected but represents an important dimension often overlooked in technical discussions
Overall Assessment

There is remarkably strong consensus among all speakers that sustainable AI development is both necessary and achievable through technical innovation, policy intervention, and international collaboration. Key areas of agreement include the alignment of business incentives with sustainability goals, the effectiveness of technical approaches like model compression and mixture of experts, the importance of open source collaboration, and the need for government leadership through procurement and standards.

Very high level of consensus with no significant disagreements identified. This strong alignment suggests that sustainable AI has moved from being a niche concern to a mainstream priority across industry, government, and international organizations. The implications are positive for coordinated global action on sustainable AI development, as stakeholders appear aligned on both the problems and potential solutions.

Differences
Different Viewpoints
Approach to AI sovereignty and technology stack control
Speakers: Ambassador Philip Tigo, Abhishek Singh
Emerging economies need realistic approaches to AI sovereignty and should focus on specific parts of the technology stack
India focuses on smaller models for specific use cases rather than chasing trillion-parameter models
Ambassador Tigo argues that countries need to be realistic about which parts of the AI stack they can control domestically and make strategic trade-offs, while Singh describes India’s approach of focusing on practical applications without chasing large models, suggesting different philosophies about national AI development strategies
Definition and scope of AI safety
Speaker: Ambassador Philip Tigo
AI safety definitions should expand to include environmental concerns beyond just model safety
Ambassador Tigo uniquely argues for expanding AI safety definitions to include environmental impacts, while other speakers focus on technical efficiency and sustainability without explicitly challenging current AI safety frameworks
Unexpected Differences
Role of business incentives versus government intervention
Speakers: Arthur Mensch, Ambassador Philip Tigo
Business interests align with sustainability as efficiency becomes crucial for competitive advantage
Environmental standards need to be developed and implemented at scale across the AI industry
While both speakers support sustainability, Mensch expresses confidence that market forces will naturally drive efficiency due to competitive pressures, whereas Tigo emphasizes the need for regulatory standards, suggesting different levels of trust in market-driven solutions
Overall Assessment

The discussion shows remarkably high consensus on the importance of sustainable AI development, with disagreements primarily focused on implementation strategies rather than fundamental goals. Key areas of difference include approaches to AI sovereignty for developing countries, the optimal balance between market forces and regulatory intervention, and specific technical strategies for achieving efficiency.

Low to moderate disagreement level with high strategic alignment. The speakers demonstrate strong consensus on core principles (sustainability is essential, efficiency drives competitiveness, international cooperation is needed) but differ on tactical approaches. This suggests a mature discussion where fundamental disagreements have been resolved, leaving room for productive debate on implementation details. The implications are positive for sustainable AI development, as the alignment on goals provides a strong foundation for collaborative action despite tactical differences.

Partial Agreements
All speakers agree on the need for efficient AI models but disagree on the optimal approach – Mensch emphasizes open sourcing to reduce redundant training, Manyika focuses on creating model families with different efficiency profiles, while Singh advocates for avoiding large models entirely in favor of smaller, task-specific solutions
Speakers: Arthur Mensch, James Manyika, Abhishek Singh
Open sourcing large models reduces carbon externalities by preventing multiple companies from training similar models
Google covers the performance-efficiency frontier with Gemini family models and uses mixture of experts architectures
India focuses on smaller models for specific use cases rather than chasing trillion-parameter models
All speakers agree on the importance of clean energy for AI infrastructure but pursue different strategies – Mensch focuses on strategic geographic placement, Manyika on comprehensive renewable energy investments, and Singh on off-grid solutions and small modular reactors
Speakers: Arthur Mensch, James Manyika, Abhishek Singh
Training location matters – France uses nuclear energy and Sweden uses hydro for lower carbon intensity
Google invests in nuclear, geothermal, hydro, wind and solar energy with a goal of 24-7 carbon-free operations
India is developing off-grid solutions and small modular reactors to reduce load on existing grid
Both speakers agree on the need for government intervention to drive sustainability but differ in approach – Mensch focuses on procurement policies as market incentives, while Tigo emphasizes the need for comprehensive industry-wide environmental standards
Speakers: Arthur Mensch, Ambassador Philip Tigo
Governments should include sustainability criteria in public procurement to accelerate industry efficiency
Environmental standards need to be developed and implemented at scale across the AI industry
Takeaways
Key takeaways
Sustainable AI is now a global imperative, not an option, due to AI’s rapidly growing energy demands that threaten to outpace green energy progress
The future of AI will be defined by resilience and efficiency rather than scale alone, with business interests naturally aligning with sustainability goals due to energy constraints
Small, optimized AI models can reduce energy consumption by up to 90% without compromising performance, making AI more inclusive and accessible to resource-constrained regions
Open source approaches significantly reduce carbon externalities by preventing multiple companies from training similar large models
AI can create positive environmental impact through applications in grid management, agriculture, climate adaptation, and material science
Energy efficiency is becoming a competitive advantage as AI companies essentially become utility companies converting electricity into tokens
Government policy should include sustainability criteria in procurement and support off-grid renewable energy solutions for AI infrastructure
Resolutions and action items
Launch of the Resilient AI Challenge with registration deadline of March 15th to advance compressed, energy-efficient AI models
Publication of the second version of global approach on standardization for AI environmental sustainability by ITU, IEEE, and ESO
Coalition will launch AI research pitch sessions in 2026 to connect university projects with funding and industry partners
Winners of the Resilient AI Challenge will be announced at the AI for Good Summit in July in Geneva
India to continue developing off-grid solutions and small modular reactors to reduce grid load
Continued investment in breakthrough research areas like fusion energy and inference-specific computing chips
Unresolved issues
How to balance AI sovereignty desires of emerging economies with practical sustainability constraints
Lack of comprehensive research on AI safety that includes environmental concerns beyond just model safety
Need for deep-dive studies on environmental footprints of specific AI use cases across different sectors
Development and implementation of industry-wide environmental standards for AI systems
How to effectively measure and standardize AI environmental impact across different applications and regions
Scaling sustainable AI solutions while meeting growing inference demands from billions of users
Suggested compromises
Emerging economies should be realistic about which parts of the AI technology stack to keep domestically versus leveraging international resources
Focus on sector-specific, use case-based approaches rather than pursuing general-purpose trillion-parameter models
Balance the need for efficient AI solutions with environmental sustainability goals and 2030 SDG commitments
Combine large frontier models with smaller, specialized models to optimize the performance-efficiency trade-off
Use AI for both direct applications and to optimize energy systems, creating a positive feedback loop for sustainability
Thought Provoking Comments
What if the next breakthrough in AI is not about building ever larger models, but about building leaner, more resilient systems, systems that can solve real-world problems under real-world constraints, including in low-resource environments.
This comment fundamentally reframes the AI development paradigm from a ‘bigger is better’ mentality to efficiency-focused innovation. It challenges the prevailing industry narrative about scaling and introduces the concept that true advancement might come through constraint-driven design rather than resource abundance.
This comment set the philosophical foundation for the entire panel discussion. It shifted the conversation from defensive justifications of AI’s energy consumption to proactive discussions about how efficiency can drive innovation. All subsequent speakers referenced this efficiency-first mindset, with James Manyika noting that ‘no one really talks about model size anymore’ and Arthur Mensch emphasizing sparse architectures.
Speaker: Dr. Tafik Delassie
A single large AI model can consume over 1,000 megawatt hours of electricity, enough to power villages across India for a whole year, placing increasing pressure on energy systems and reinforcing inequalities in access to compute and infrastructure.
This stark comparison makes the abstract concept of AI energy consumption tangible and morally urgent by connecting it to real-world inequality. It transforms a technical discussion into an ethical imperative by highlighting how AI development could exacerbate global disparities.
This comment introduced the equity dimension to the sustainability discussion, which became a recurring theme. Ambassador Philip Tigo later built on this by discussing sovereignty and trade-offs for emerging economies, while Abhishek Singh emphasized the need for cost-effective solutions for public sector applications funded by taxpayers.
Speaker: Dr. Tafik Delassie
Being an AI company is turning into being a utility company, in that you’re basically turning electricity into tokens. It’s highly competitive, so that means the margins are getting thinner, which means that things are also getting price sensitive, and so when it comes to being price sensitive, efficiency really matters.
This analogy brilliantly captures the commoditization of AI and explains why sustainability isn’t just an ethical choice but an economic necessity. It reframes AI companies as infrastructure providers rather than tech innovators, which has profound implications for how the industry should be regulated and operated.
This comment provided the economic logic that unified the panel’s arguments. It explained why business interests align with sustainability goals, supporting James Manyika’s investments in green energy and Abhishek Singh’s focus on cost-per-query optimization. It shifted the discussion from ‘should we be sustainable?’ to ‘how do we compete through sustainability?’
Speaker: Arthur Mensch
We are not chasing the trillion parameter models. We are not in the parameter game… We are trying to think of what are the solutions which can be built by using current level of models which are available, which can solve societal problems in various sectors.
This represents a fundamentally different national AI strategy that prioritizes practical impact over technological prestige. It challenges the assumption that countries must compete in the ‘AI arms race’ and instead proposes a more pragmatic, application-focused approach.
This comment introduced a concrete alternative to the frontier model race, demonstrating how developing nations can participate meaningfully in AI without massive infrastructure investments. It influenced the discussion toward sector-specific applications and validated the panel’s focus on efficiency over scale, while also highlighting the importance of the Resilient AI Challenge for countries following this approach.
Speaker: Abhishek Singh
Every country wants to have the entire stack in their country. So I think governments need to be very realistic around which parts of the stack they really want to keep in their country, especially if you have this AI for green and green AI conversation.
This comment introduces the complex reality of AI sovereignty versus sustainability trade-offs that developing nations face. It challenges the assumption that every country should or can develop complete AI capabilities domestically and suggests strategic choices are necessary.
This comment added geopolitical nuance to the technical discussion, highlighting that sustainability strategies must account for national sovereignty concerns. It complemented Abhishek Singh’s practical approach by acknowledging the political realities that shape AI development strategies, and influenced the conversation toward collaborative rather than competitive approaches to AI development.
Speaker: Ambassador Philip Tigo
Overall Assessment

These key comments fundamentally transformed what could have been a superficial discussion about ‘green AI’ into a sophisticated analysis of how efficiency drives innovation, equity, and economic competitiveness. Dr. Delassie’s opening reframe established efficiency as the new frontier, while his inequality comparison added moral urgency. Arthur Mensch’s utility company analogy provided the economic logic that unified all arguments, explaining why sustainability is inevitable rather than optional. Abhishek Singh’s rejection of the parameter race offered a concrete alternative strategy, while Ambassador Tigo’s sovereignty concerns added necessary geopolitical realism. Together, these comments created a comprehensive framework showing that sustainable AI isn’t just environmentally responsible—it’s the most practical path to inclusive, economically viable AI development. The discussion evolved from defensive justifications to proactive strategies, from technical concerns to systemic solutions, and from competitive dynamics to collaborative imperatives.

Follow-up Questions
How can AI be deployed in public services, small and medium-sized enterprises, rural health systems and low connectivity environments in developing countries and advanced economies facing energy constraints?
This addresses the practical deployment challenges of AI in resource-constrained environments, which is crucial for ensuring equitable access to AI benefits globally.
Speaker: Dr. Tafik Delassie
How can we measure and standardize AI environmental sustainability across different implementations and use cases?
The Minister emphasized that ‘you can’t improve what you can’t measure’ and announced the second version of global standardization approaches, indicating ongoing need for better measurement frameworks.
Speaker: Anne Le Henanf
What are the specific environmental impacts and carbon footprints of AI use cases across different sectors like food systems?
He emphasized the need for deep dives into specific use cases to understand environmental footprints, noting that research on AI safety including environmental concerns is lacking.
Speaker: Ambassador Philip Tigo
How can AI safety frameworks be expanded to include environmental concerns beyond just model safety?
Current AI safety research focuses primarily on models rather than environmental harms from use, representing a gap in comprehensive safety assessment.
Speaker: Ambassador Philip Tigo
What are the optimal strategies for emerging economies to balance AI sovereignty with sustainability constraints?
He noted that every country wants the entire AI stack domestically, but this creates trade-offs with sustainability goals that need realistic government approaches.
Speaker: Ambassador Philip Tigo
How can public procurement policies be designed to incentivize AI efficiency and sustainability?
He suggested that government procurement requirements could accelerate market-driven efficiency improvements by making sustainability part of the procurement criteria.
Speaker: Arthur Mensch
What research is needed in model routing, distillation, and agent harnessing that doesn’t require massive computational resources?
He identified these as areas where efficient research can be conducted without thousands of GPUs, making them accessible for public research institutions.
Speaker: Arthur Mensch
How can off-grid energy solutions (solar, wind, geothermal, small modular reactors) be optimized specifically for AI infrastructure?
Both speakers emphasized the importance of off-grid solutions to reduce burden on public infrastructure, with specific mention of fusion energy research and small modular reactors for AI applications.
Speaker: James Manyika and Abhishek Singh
What are the optimal approaches for using AI to improve grid efficiency and reduce transmission and distribution losses at scale?
He mentioned a specific project showing 10-15% reduction in T&D losses, indicating potential for broader research and implementation of AI for grid optimization.
Speaker: Abhishek Singh

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Scaling AI for Billions: Building Digital Public Infrastructure


Session at a glanceSummary, keypoints, and speakers overview

Summary

This panel discussion explored the intersection of artificial intelligence and cybersecurity, examining both how AI can enhance security measures and how AI systems themselves need protection. The conversation featured experts from government, private sector, and cybersecurity companies discussing the dual nature of AI as both an opportunity and a challenge for security professionals.


Daisy Chittilapilly from Cisco emphasized that AI follows the pattern of previous technologies in presenting both benefits and risks, noting that while AI promises better security management at machine scale, it also introduces new vulnerabilities like model jailbreaking and data poisoning. G. Narendra Nath highlighted the unprecedented speed of AI adoption compared to previous technological revolutions, creating challenges for both enterprises and national security, particularly because adversaries can leverage AI more effectively than defensive users.


A.S. Lakshminarayanan from Tata Communications warned that organizations are “running towards a cliff” by implementing AI on already fragile digital infrastructure, arguing that enterprises need an “AI operating system” with proper governance and trust layers rather than focusing solely on individual AI applications. Richard Marko stressed the importance of understanding AI agent operations and maintaining oversight of automated processes to ensure security.


The panelists agreed that AI is fundamentally changing cybersecurity from protecting systems and data to protecting decision-making and trust itself. They emphasized the need for new assessment frameworks, better capacity building across sectors, and a shift from viewing AI as merely an automation tool to recognizing it as a technology that scales decisions. The discussion concluded with expectations that AI-native companies will emerge to disrupt existing business models, while nations must balance AI adoption for competitive advantage with protection against its adverse effects.


Keypoints

Major Discussion Points:

AI’s Dual Role in Cybersecurity: The discussion extensively covered how AI serves both as a solution for cybersecurity challenges (enabling better threat detection and management at machine scale) and as a source of new risks (model poisoning, jailbreaking, data leakage, and sophisticated attacks by adversaries).


Infrastructure Fragility and Readiness Gap: Multiple panelists emphasized that current digital infrastructure is already fragile, and organizations are rushing to implement AI without adequate foundational security measures. There’s a significant gap between AI adoption ambitions and actual organizational readiness in terms of data strategy, compute capacity, and security frameworks.


Speed of AI Adoption vs. Security Preparedness: Unlike previous technological revolutions that allowed time for gradual adaptation, AI is being adopted at “breakneck speed” without sufficient time to understand and mitigate adversarial effects, creating unprecedented risks for enterprises and national infrastructure.


Need for New Frameworks and Governance: The discussion highlighted the necessity for new assessment frameworks, AI operating systems, and governance structures. This includes corporate AI responsibility, trust and verification mechanisms, and the evolution from traditional cybersecurity to protecting decision-making processes and maintaining system trust.


Strategic Transformation and Future Outlook: Panelists discussed how AI represents a paradigm shift from scaling transactions to scaling decisions, requiring new business models, talent approaches, and the emergence of AI-native companies that could disrupt existing industries within the next five years.


Overall Purpose:

The discussion aimed to explore the intersection of AI and cybersecurity from multiple perspectives – examining both how AI can enhance cybersecurity capabilities and how AI introduces new security challenges. The panel sought to address current readiness gaps, discuss strategic implications for enterprises and national infrastructure, and envision the future landscape of AI-enabled security.


Overall Tone:

The discussion maintained a balanced but cautionary tone throughout. While panelists acknowledged the tremendous opportunities AI presents for cybersecurity (describing it as creating a “level playing field” between attackers and defenders), there was a consistent undercurrent of concern about the pace of adoption outstripping security preparedness. The tone was professional and analytical, with experts sharing both optimistic possibilities and serious warnings about infrastructure fragility and the need for more thoughtful, systematic approaches to AI implementation.


Speakers

Speakers from the provided list:


Samrat Kishor – Moderator/Host of the discussion


Daisy Chittilapilly – Works at Cisco, focuses on AI and cybersecurity infrastructure, networking and security solutions


G. Narendra Nath – Government official working on national security and cybersecurity policy, involved with CERT India and cybersecurity frameworks


A. S. Lakshminarayanan – Executive at Tata Communications (TataCom), focuses on digital infrastructure and AI operating systems for enterprises


Richard Marko – Expert in cybersecurity resilience, AI risks, and digital security systems


Dharshan Shanthamurthy – Works with a cybersecurity company, provides consulting and thought leadership for large enterprises and government officials


Pradeep Sekar – Cybersecurity expert working with enterprise leaders and boards on AI risk management and strategic cybersecurity


Additional speakers:


None – all speakers mentioned in the transcript are included in the provided speakers list.


Full session reportComprehensive analysis and detailed insights

This comprehensive panel discussion at a technology summit explored the complex intersection of artificial intelligence and cybersecurity, examining both the transformative opportunities and significant risks that AI presents to digital security infrastructure. The conversation brought together experts from government, telecommunications, cybersecurity, and technology consulting to address what moderator Samrat Kishor framed as the dual challenge of “AI for cybersecurity” and “cybersecurity for AI.”


The Dual Nature of AI in Cybersecurity

The discussion opened with Daisy Chittilapilly from Cisco establishing a fundamental framework for understanding AI’s role in cybersecurity. She emphasised that AI follows the pattern of previous technologies in presenting both opportunities and challenges, but noted its unique potential to redefine how humanity lives, works, and plays. Chittilapilly highlighted that cybersecurity has long struggled to manage threats at human scale, making AI’s promise of machine-scale security management particularly compelling. However, she cautioned that AI simultaneously introduces new vulnerabilities, including model jailbreaking, confidential information leakage, data poisoning, and inherent vulnerabilities in open-source models.


This dual nature became a recurring theme throughout the discussion, with Dharshan Shanthamurthy from the cybersecurity sector providing a particularly optimistic perspective. He argued that AI represents a historic opportunity to level the playing field between attackers and defenders, noting that cybersecurity has traditionally been asymmetric, with intruders needing to succeed only once whilst defenders must succeed consistently. Shanthamurthy described AI as enabling the identification of “needles in haystacks” and transforming security operations centres from requiring constant human monitoring to automated, agent-driven processes. He emphasised that India is positioned at a “sweet spot” between AI and cybersecurity, creating opportunities to develop world-class talent and capabilities.


Infrastructure Fragility and the Readiness Gap

A critical concern emerged around the fragility of existing digital infrastructure and organisations’ readiness for AI implementation. A.S. Lakshminarayanan from Tata Communications delivered perhaps the most sobering assessment, warning that organisations are “fast running towards the cliff” by implementing AI on already fragile digital foundations. He used a powerful metaphor, stating that enterprises “can’t build a skyscraper with a foundation of a bungalow,” which became a reference point throughout the discussion.


Lakshminarayanan detailed how AI will exponentially increase network traffic, particularly east-west traffic, through numerous API calls and long-lived sessions that will strain edge infrastructure. He argued that the excitement around AI has overshadowed the critical need to strengthen foundational digital infrastructure before adding AI capabilities.


This concern was reinforced by Chittilapilly’s reference to Cisco’s AI readiness research, which revealed significant gaps in enterprise preparedness. Despite widespread enthusiasm for AI deployment among Indian enterprises, substantial readiness challenges exist across data strategies, compute capacity, threat understanding, and innovation infrastructure.


Unprecedented Speed and Strategic Asymmetries

G. Narendra Nath, representing the national cybersecurity perspective, highlighted a crucial difference between AI and previous technological revolutions: the unprecedented speed of adoption. Unlike earlier technologies that allowed gradual integration and time to understand both beneficial and adversarial applications, AI adoption is occurring at “breakneck speed” with widespread enterprise willingness to adopt AI tools immediately.


Nath identified a critical strategic asymmetry: adversarial actors, including nation-states and malicious enterprises, are more motivated and focused in their AI implementation than defensive users. Whilst organisations adopt AI primarily for productivity and efficiency gains, adversaries dedicate significant effort and thought to leveraging AI more effectively for malicious purposes. This creates a dangerous imbalance that must be addressed through conscious effort and strategic planning.


He also noted technical challenges unique to AI systems, particularly the lack of separation between control and data planes that characterises traditional systems. In AI, data itself becomes control, making systems vulnerable to model poisoning through inputs and creating drift over time that makes system behaviour unpredictable and non-deterministic.
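The drift Nath describes can be watched for operationally. The sketch below is an illustrative Python example, not drawn from the panel: it compares a baseline sample of model output scores against a recent sample using the Population Stability Index, a common drift heuristic. All data, bucket counts, and thresholds here are invented for illustration.

```python
# Minimal sketch of output-drift monitoring for a deployed model.
# The samples and the 10-bucket layout are illustrative assumptions,
# not any specific product's API.
import math
from collections import Counter

def psi(baseline, recent, bins=10):
    """Population Stability Index between two score samples in [0, 1].

    PSI near 0 means the distributions match; values above roughly 0.2
    are a common rule-of-thumb signal that behaviour has drifted.
    """
    def bucket(scores):
        counts = Counter(min(int(s * bins), bins - 1) for s in scores)
        total = len(scores)
        # A small epsilon keeps empty buckets from producing log(0).
        return [(counts.get(i, 0) + 1e-6) / total for i in range(bins)]

    b, r = bucket(baseline), bucket(recent)
    return sum((rb - bb) * math.log(rb / bb) for bb, rb in zip(b, r))

baseline = [0.1, 0.2, 0.2, 0.3, 0.5, 0.5, 0.6, 0.8, 0.9, 0.9]
stable   = [0.1, 0.2, 0.3, 0.3, 0.5, 0.6, 0.6, 0.8, 0.8, 0.9]
drifted  = [0.7, 0.8, 0.8, 0.9, 0.9, 0.9, 0.95, 0.95, 0.99, 0.99]

# A week that looks like the baseline scores low; a shifted week scores high.
assert psi(baseline, stable) < psi(baseline, drifted)
```

A rising index between scheduled checks is one way to surface the non-deterministic behaviour change described above before it is mistaken for, or masks, a security incident.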


Risk Management Frameworks and Governance

Pradeep Sekar introduced a comprehensive framework for understanding AI risks through three distinct lenses: compliance risk (meeting regulatory requirements like the EU AI Act), operational risk (assessing model reliability and service provider dependencies), and strategic risk (understanding financial impact of AI-driven attacks on organisational reputation and customer relationships). He noted that whilst most organisations focus on compliance, few have progressed to strategic risk assessment, which requires quantifying potential financial impacts and communicating these to boards in business terms.
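As a rough illustration of how such a three-lens register might be organised in practice, the following Python sketch tags each risk with one of the lenses and rolls quantified strategic exposure up into board-ready figures. The risk items, names, and dollar amounts are entirely hypothetical.

```python
# Illustrative sketch of a three-lens AI risk register:
# compliance, operational, and strategic, with strategic risks
# carrying a quantified financial exposure for board reporting.
from dataclasses import dataclass
from enum import Enum

class Lens(Enum):
    COMPLIANCE = "compliance"    # e.g. EU AI Act obligations
    OPERATIONAL = "operational"  # model reliability, vendor dependence
    STRATEGIC = "strategic"      # financial / reputational impact

@dataclass
class AIRisk:
    name: str
    lens: Lens
    exposure_usd: float = 0.0  # only meaningful for strategic risks

register = [
    AIRisk("EU AI Act conformity assessment gap", Lens.COMPLIANCE),
    AIRisk("LLM provider single point of failure", Lens.OPERATIONAL),
    AIRisk("Deepfake-driven payment fraud", Lens.STRATEGIC, 2_500_000),
]

def board_summary(risks):
    """Roll the register up into the business terms a board expects."""
    by_lens = {lens: [r for r in risks if r.lens is lens] for lens in Lens}
    exposure = sum(r.exposure_usd for r in by_lens[Lens.STRATEGIC])
    return {lens.value: len(items) for lens, items in by_lens.items()} | {
        "quantified_strategic_exposure_usd": exposure
    }
```

The point of the roll-up is the last field: compliance and operational items can be counted, but strategic risks only become a board conversation once they carry a number.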


Sekar also argued that cybersecurity must evolve beyond protecting systems and data to protecting decision-making and trust. He introduced the concept of measurable trust through provenance, authenticity, and verification mechanisms, suggesting that future security systems will need to assess and rate the trustworthiness of transactions and communications in real-time.
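Provenance-based trust of this kind can be made concrete with message authentication. The following is a minimal, illustrative Python sketch using the standard-library `hmac` module; the shared key and messages are invented, and a production system would use asymmetric signatures and proper key management rather than a hard-coded secret.

```python
# Sketch of "measurable trust" via provenance checking: a sender signs
# each message, and the receiving system verifies the tag before
# trusting the instruction it carries.
import hmac
import hashlib

KEY = b"demo-shared-secret"  # hypothetical key, for illustration only

def sign(message: bytes) -> str:
    return hmac.new(KEY, message, hashlib.sha256).hexdigest()

def trust_rating(message: bytes, tag: str) -> str:
    """Rate a transaction's provenance before acting on it."""
    return "trusted" if hmac.compare_digest(sign(message), tag) else "untrusted"

order = b"approve payment of 10,000 to vendor 42"
tag = sign(order)

assert trust_rating(order, tag) == "trusted"
# Any tampering with the message breaks provenance.
assert trust_rating(b"approve payment of 90,000 to vendor 99", tag) == "untrusted"
```

The rating here is binary; real-time trust scoring of the kind described above would extend the same verify-before-act pattern with graded signals such as sender history and content checks.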


The moderator suggested evolving from corporate social responsibility to “corporate AI responsibility,” where organisations take explicit ownership of their AI systems’ actions and impacts.


The Need for New Operating Systems and Architectures

Lakshminarayanan proposed the concept of an “AI operating system” as a comprehensive solution, arguing that organisations should move beyond evaluating individual large language models to implementing systematic frameworks with distinct layers: context (bringing together relevant information), agentic (enabling autonomous actions), and trust/governance (controlling what agents can and cannot do).


This architectural approach addresses the fundamental challenge of making AI knowledge actionable whilst maintaining control and oversight. Without such governance layers, organisations cannot safely leverage AI models’ capabilities or ensure responsible AI behaviour.


Chittilapilly provided insights into how AI is fundamentally changing infrastructure requirements, leading to a complete rewiring and restacking of enterprise infrastructure. The traditional approach of building security as bolt-on solutions is becoming obsolete, requiring instead system resilience built into all layers of both infrastructure and AI stacks.


Sectoral Challenges and National Perspectives

From a national perspective, Nath highlighted the uneven readiness across different sectors, creating both a “cybersecurity divide” and an “AI divide.” Whilst the financial sector has achieved relative maturity in cybersecurity practices, sectors like healthcare show similar enthusiasm for AI adoption despite significantly lower cybersecurity maturity. This creates new vulnerabilities in critical infrastructure.


He outlined government initiatives to address these challenges, including the development of assessment frameworks for AI systems, capacity building programmes, and leveraging existing institutional frameworks like CERT India and sectoral regulators. The use of regulatory sandboxes in financial and telecommunications sectors provides mechanisms for safely testing AI technologies before production deployment.


Human Factors and Implementation Challenges

Richard Marko brought attention to the human element in AI security, noting that people remain the weakest link in cybersecurity and that AI amplifies this vulnerability. He highlighted how AI-generated deep fakes and sophisticated social engineering make it increasingly difficult for humans to distinguish legitimate communications from scams. Additionally, AI agents performing tasks on behalf of users create new risks through potentially unsupervised actions.


The discussion also revealed talent challenges, with observations that younger professionals often adapt more readily to AI paradigms than experienced professionals, creating additional workforce considerations for organisations implementing AI systems.


Future Outlook and Strategic Implications

The panel identified AI as representing a paradigm shift from scaling transactions (the focus of previous technologies) to scaling decisions. This distinction requires organisations to rethink culture, talent development, and operational approaches. Lakshminarayanan argued that the decisions made in the next five years regarding AI strategy will determine organisational health for decades to come, particularly for his company’s infrastructure planning.


From a national perspective, Nath emphasised that AI adoption represents a competitive advantage that countries cannot afford to ignore, as other nations and enterprises will leverage AI for business improvement. However, this must be balanced with protection against AI’s adverse effects and careful management of dependencies created by AI integration into critical infrastructure.


Assessment and Governance Development

A significant gap identified throughout the discussion was the lack of adequate assessment frameworks for AI systems. Nath highlighted ongoing government efforts to develop frameworks that evaluate both security and functional aspects of AI systems before deployment, including the ability to distinguish between cybersecurity incidents and AI system malfunctioning or poor design.


Both government and private sector initiatives are developing these capabilities, with emphasis on making frameworks accessible to enterprises across sectors and providing clear guidance on security requirements and best practices.


Conclusion

The panel discussion revealed a complex landscape where AI’s transformative potential in cybersecurity is matched by significant implementation challenges and new risk categories. The consensus emerged that whilst AI offers unprecedented opportunities to improve security capabilities and level the playing field between attackers and defenders, successful implementation requires fundamental changes in infrastructure, governance, and strategic thinking.


The conversation highlighted the critical importance of building proper foundations before implementing AI capabilities, developing comprehensive governance frameworks, and addressing the speed mismatch between AI adoption and security preparedness. The discussion emphasised the need for coordinated action across sectors and between public and private stakeholders to address capacity building, assessment frameworks, and strategic risk management.


The panel concluded with recognition that India is well-positioned to develop world-class capabilities at the intersection of AI and cybersecurity, provided that proper attention is paid to building robust foundations and governance frameworks for this transformative technology.


Session transcriptComplete transcript of the session
Samrat Kishor

The context is, have you overdone it? Right? When we talk about AI and cybersecurity, these two areas, how do they come together? There's AI for cybersecurity, and there is cybersecurity for AI. Right? So what we're going to do is, we're going to discuss both aspects. We're going to at least try. So the first question, and I'd like to actually point it to Ms. Daisy: what has changed, if you were to look at the larger picture, the big picture, in terms of AI coming into cybersecurity? What has changed?

Daisy Chittilapilly

I think, as happens with all technologies, AI is no different in that sense. It is, of course, as we've been hearing over the last few days, a technology that will redefine humanity and how we live, work, play, all of that. But one thing it has in common with all of the other technologies that have come before it is that it's both an opportunity and a challenge. And that's particularly true when it comes to the security space. So on one side, there is the promise: for some time now, with the advent of technologies, the number of things getting connected, all of our lives going phygital, the threat landscape has, of course, expanded, and threats have become more and more complex and complicated.

And for some time now, we've not been able to manage cybersecurity at human scale. So machine scale, you know, a lot of tooling was already in that space. So there is the promise with AI that you can manage security better. So there is definitely that opportunity. But at the same time, there is the recognition, like Dario Amodei said on the main stage yesterday, that his biggest concern, and all of our concerns, is that AI brings a set of risks, not all of which we fully understand, though there are a lot of them that we do know of at this point in time today. So, like I said, that commonality is there with all technologies that came before it.

It is both an opportunity and a challenge. Because we've got to protect models from being jailbroken. We've got to make sure that the models don't leak our confidential information, and that no one poisons our data. And most of these are open-source models that come with inherent vulnerabilities, so how do we detect them? So we've got to think about securing AI as well.

Samrat Kishor

Absolutely, and very rightly said. So it's becoming a fundamental part of the infrastructure that is then used to build applications. So earlier, I think, the perspective was that we were looking at AI just at the application layer, but it's gone much below, into the infrastructure. It's got embedded into the kind of systems which are now getting created and deployed.

And that is where I'd like to bring in you, Narendra, for your perspectives on what you are seeing in terms of national security. Is it something which is giving us a spike, a blip, something which you can discuss, disclose here?

G. Narendra Nath

Yeah, I mean, it's required to be discussed. That's one thing that's definite. Now, you know, I take the points that you've said. One thing about all the other technological revolutions, as you said, is that there was a time frame over which they seeped into the system. And then we had time to look at how do I use it beneficially, and also to look at the adversarial effects of it and how do I mitigate those things. The case of AI is that it's really happening at breakneck speed. And there's also an adoption, a willingness in enterprises to adopt the different AI tools that are there. So that is where the scary part is.

And the other is the adversarial part of AI: though you use AI for cybersecurity, the issue is that there are nation states, or big enterprises which are adversarial, which would be using AI as a tool, and they have got a lot of motivation to put in effort and thought into how to use it more effectively. Whereas the persons who are actually using AI for their own benefit are looking at how do I improve my productivity, how do I improve my efficiency; that's the focus area they are in. So this is where there is a disconnect, and this has to be really bridged, and that's where the problem is.

The summit actually, in one way, is helping people become conscious about some of the measures that have to be taken. That is one part. The other is the difference between other systems and this, and it is a little technical: in the other systems we have a separate control plane and a separate data plane, so we could actually control access to the control plane. But here the data itself is the control, so you have poisoning of models happening through the inputs that are there. So you could have a drift, and over a period of time you will find that the model will not be behaving as you would expect it to behave, and it's also not very deterministic. So there are challenges in how do I protect AI systems so that they give me consistent results over a period of time. Then there is also a lack of clarity about what is a cybersecurity issue and what is an issue of malfunctioning or poor design of an AI system; that lack of clarity also results in the challenges that are there.

I think these are the preliminary thoughts that I have. So at the national scale, the issue is that when you have multiple entities at the enterprise scale, in the financial sector, the telecom sector and the power sector, all adopting AI, the effect of compromises on the critical information infrastructure is something that would actually make us wake up and then see what could be done. Those are the issues that are there.

Samrat Kishor

Excellent pointers, sir. Excellent pointers. And since you brought in the private sector and the way they've evolved, they're also subjected to these risks which are evolving in nature. I'd like to bring in Lakshmi, sir, from Tata here. So, sir, a lot of infrastructure is being built, connected, and communicated using what you're building for the nation. How are you seeing the paradigm shift from, let's say, how it used to be before AI was commoditized into everyday technology? It used to be in the labs; now it's out in everybody's hands. So what is the change that you are seeing, and the impact you're seeing on critical infrastructure?

A. S. Lakshminarayanan

I don't think people have woken up to the fact that they are fast running towards the cliff, because I genuinely think that the digital infrastructure in enterprises today is already fragile. And we know that from an enterprise security point of view there are so many attacks happening. And we know that there are huge issues when it comes to, for example, what we now talk about as IT-OT security; the operational technology in factories was never in the purview of IT security. Security today, and digital infrastructure in general, is still very fragile. It's islands of different OEM technologies and many, many things. It is a major issue.

Now, on top of this fragility, you add AI, and this fragility is going to be multiplied a hundred times, across many, many kinds of platforms, because AI is going to increase the network traffic, especially the east-west traffic, again multifold. The number of API calls that somebody… And we are all saying, oh, I'll embed AI at the edge of the device, and if I have a banking application I'll do that, but nobody has thought it through. If you put inferencing at the edge, the number of API calls these have to make is tremendous, and these API calls are long-lived sessions. They're not traditional API calls.

So the edge infrastructure is going to come under tremendous strain. That's why I'm saying that in all our excitement about AI, and I'm very passionate and excited about AI, I genuinely feel that people are not looking at the foundations properly. So that is very fragile, and that is one point I want to make. The second point is that I would like to expand the scope of this discussion. It's not about AI and cybersecurity alone; it's also about a broader trust question. I think we all know that with messages today, you don't know whether they're fake or real. Apply that in the enterprise context. And there was a talk about, you know, model drifts and so on and so forth.

So what we at Tata Communications are doing, one, is to protect the digital infrastructure through the many, many things that we can do. And the unfortunate part is I don't think enterprises have woken up to the fact that they have to do it. So I tell them that you can't build a skyscraper with the foundation of a bungalow, which is what they're trying to do. But when it comes to the drift and the trust part of it, I do believe that enterprises require an AI operating system. And what we mean by that AI operating system is something that brings the context together, because LLMs will provide the knowledge. To make that knowledge into actionable intelligence, you need the context layer, you need the agentic layer, and more importantly, you need to have a trust and governance layer which will control what an agent will do or will not do.

And if I take that control in my hands, and say that I will configure and ensure this agent will do something or not do something, I can make use of the models underneath a lot more intelligently. So I think rather than focusing on whether this LLM is good or that LLM is good and so on, this AI operating system is what is required for people to build an application which will ensure that all of these are governed properly.

Samrat Kishor

Sir, that’s a great point. In fact, I was having a conversation a few days back, and I was saying that that from the time of corporate social responsibility, it’s time to evolve to corporate AI responsibility, where corporates start talking about how they’re controlling and owning the actions of the AI that they’re building and deploying. Great perspective, sir. Thank you very much. At this point, I’d like to bring in Richard to sort of continue the talk about digital infrastructure and resilience. So how has resilience in your perspective evolved when we talk about AI risks to cybersecurity and vice versa?

Richard Marko

Well, the question of resilience is a complex question, so I will bring up a few aspects that I think are very important. It is well understood in the industry that people are typically the weakest link in cybersecurity. The reason is that we as human beings did not evolve to deal with machines, computers and so on, and most of us don't have really deep technical knowledge about how systems work. So we are to a big extent dependent on a relatively superficial understanding, and so we are easier to trick with different social engineering tricks and so on. Now with AI this is becoming a big issue, because how can you distinguish a scam from a real communication when the scam communication looks exactly like the real communication? I'm talking about deepfakes and so on. So this is one aspect of the risk, connected directly with people.

The other aspect is that we want AI to empower people to do more things and do them in an easier way. So we have those agents, and we give them, or we want to give them, commands like: do this for me, or that for me. But we don't understand all the steps that the agent will take on our behalf when performing those tasks, and in each of those tasks there can be a risk factor involved without us knowing. If you want to perform this action, you will need those additional tools to achieve it; and where do you get those additional tools? If AI decides on your behalf that these are the tools you need, software packages, whatever it is, and they get to your computer without this being supervised, then this is a problem. So we have to be very careful. Where I'm heading is that resilience here is really about protecting, about paying attention to details.

What is actually happening? What is running in the background? How are your commands transferred to the agents? Is there a possibility for them to be intercepted, to be modified? It was difficult and complex even before the advent of the new AI agentic approach; now it's becoming even more important to really go into all the details. And we just heard from Lakshmi that he sees we are moving towards a cliff. Well, it depends on us, of course. We want to go fast, we want to deploy, we are all excited about AI, but maybe sometimes we need to slow down a little bit and make sure that the pieces are in place and cybersecurity is not overlooked.

Samrat Kishor

Excellent, excellent perspectives. And I think an offshoot to that question can be to Ms. Daisy, which is what are you seeing as changing when you’re talking about digital infrastructure and especially the connectivity which it needs because you’re at Cisco, right? And here is something which is connecting a lot of things to a lot of other things. So how are you seeing changes happening, especially when you talk about resilience and what’s going on inside digital infrastructure?

Daisy Chittilapilly

So I think Lakshmi touched on a very important point about the fragility of the underlying infrastructure, and that is something I want to reiterate. For the past few years, we've been publishing an AI readiness index. The good news is that we are as ready as everybody else. The bad news is maybe we're not as ready as we think we are, which is the point Lakshmi is making, right? 90% of the just under 1,000 large enterprises that we spoke to in India want to deploy agents this year. Forty percent want that agent to work alongside a human being, but only about two-thirds of those enterprises really have a data layer, a data strategy, a data platform, and a data governance strategy.

Only about one-fourth have the compute capacity they need. Only about one-third are able to understand AI threats and deal with them. And less than one-fifth have the innovation engine to think about building and scaling and maintaining AI applications and use cases. So clearly there is this ambition-versus-reality gap which we have to solve for. That's not a problem as long as we all know that that's where it is, and they were acutely aware of this issue. The other thing is that AI is essentially leading to a rewiring and restacking of the enterprise. It's not just networks; it's compute, it's silicon. I know at the national level silicon security is a conversation. So all this resiliency, which we used to build almost like a bolt-on at the top, and which we used to think of only as cyber resiliency, is now a system resilience which is built into all layers of the infrastructure stack and all layers of the AI stack. And that's why at Cisco, since you asked me a network-specific question: we used to deal with connectivity largely as connectivity, and now the persona of that end port, which connects to an end device that might be doing inferencing or sits in the data center, has to be that on one side it will be a switch or a router, but on the other side it will also be a security defense point.

So this ability of building a special grade of security appliances and putting them in various parts of the network is fast becoming an outdated idea. What we've got to do is break it into a number of virtual instances that can go wherever you want the security policy to be. So it becomes a very virtual, distributed mesh rather than hardware. Yes, there will be hardware; I'm not saying it will go away. But there is this ability to infuse it into the fabric, and networks tend to be the all-pervasive fabric. That's the way, at least at Cisco, we're thinking about it. So these domains of networking and security are crashing together, and secure networking is the conversation in the network space particularly.

The other part about this is the performance requirement, which Lakshmi also alluded to. AI will put pressure on the underlying infrastructure. In a way it's an exponential technology; the demands it will create on its underlying layers are also exponential. So we've got to almost build a new category of technology: silicon, systems, applications, everything. A new category has to be built, and we have to build it in new ways. You cannot build it in the ways we built things in the past. Applications is an interesting one. We used to give an input and expect the same output on the other side. But now, if you are going to deploy AI models, this thing is probabilistic.

So you want to get it to a degree of consistency, because in a financial application or a very important citizen-service application, you give an input and the output has to be deterministic, yet you're using at the core of it a probabilistic technology. So that refinement also takes a whole lot of work. So it's rethinking in all layers, from silicon to software to systems; you have to rethink everything. Every rule we have to rethink.

Samrat Kishor

Excellent, excellent. And since you brought in that perspective of rethinking, reimagining, and how we're using AI in the operating system of the company, I'd like to bring in Dharshan here. So Dharshan, you do a lot of great work in creating thought-leadership content as well as consulting for very large companies. Of course, there are CXOs and a very highly ranked official of the government sitting here, but then what are the other CXOs thinking about when it comes to AI? Is it still a compliance thing, or has it percolated into strategy?

Dharshan Shanthamurthy

First of all, thank you. I'll probably add some context to whatever I've heard so far. My view is that any technology disruption brings in two emotions: hope as well as fear. And I'm sure the other panelists have rightfully covered the fear side of AI in cybersecurity, and rightly so; no disputing that truth. But there is a huge hope component for a cybersecurity company like us, because we are a hardcore deep-tech cybersecurity company, and I see a lot of opportunities. And we as a country, India, are at the sweet spot at the intersection between AI and cybersecurity. And this topic is very aptly crafted, because I think it's a huge opportunity for us to utilize.

And I'll tell you why. Cybersecurity has so far been a very asymmetric equation. The intruders have always had an advantage over the defenders, or anyone who's actually defending a network, because they just need to get one thing right and we need to get everything right. So it's always been asymmetric. But with AI, now all of a sudden we are at a level playing field, from a technology standpoint, to identify a needle in the haystack. One classic use case can be an agentic security operations center, because at the end of the day, if you have ever visited a security operations center, it is, 24x7, an analyst looking at a screen, almost an inhuman job, so to speak.

But today, with AI, you've got a level playing field, because we've seen those kinds of use cases being deployed at our SOC, where even a shift handover is done by an agent. So, a lot of real use cases. So I'm on the hope side; there's a lot of opportunities that we have today. And second, in terms of talent: we have a lot of youngsters sitting in this room who are looking to grow. We have spoken so much about other services, other areas evaporating in terms of job opportunities. I think we can create the world's cybersecurity talent in combination with AI, because cybersecurity and AI are not two different fields. Cybersecurity needs AI, and AI needs cybersecurity.

So I think we are at a very opportune time for us to really ride this wave and create world-class talent which can address these needs. Now, on the second part which you just spoke about: that's what we are hearing from the CXOs globally, since we deal with a lot of people in the payment ecosystem. CXOs obviously have the same construct of hope versus fear, right? Being a CISO or a CIO, there is an amount of fear that comes in, because these are real problems. For example, deepfakes or spear-phishing attacks have become more robust with AI. But one of the key things that we are trying to explain is: yes, those are things that you need to address, no doubt, but can you also look at how you can take advantage of AI?

And Lakshmi rightly pointed out, how do you have an AI operating system? Similarly, we talk about how you can have an AI security operating system, right? You should have a playbook on how to leverage AI rather than only playing defense. Those are my views, Samrat.

Samrat Kishor

Excellent, excellent views, and thank you very much for those perspectives. And I'm glad that I still see people coming in; this is an interesting session, and some people are standing as well. So I would like to bring in Pradeep now. Pradeep, as a follow-on to what I just asked Dharshan: here is something which is at the top, and while it is percolating into strategy a bit, do you think that we should have a dedicated function within an organization? And what are you seeing currently, not just in India but elsewhere as well?

Pradeep Sekar

Yeah, thank you for that. So probably adding on to what Dharshan said, I don't mind the hope-and-fear framing, because being in the cybersecurity space, both of them add to what we can do for the industry as a whole, for the country as a whole, if you will. When we talk to leaders and boards at companies in India and across the world, predominantly when the conversation is about AI, the topic goes towards innovation, competitiveness, and the ability to bring in productivity gains, right? What often gets missed is that AI is quietly reshaping the risk equation within the enterprise. Now, cybersecurity can no longer be just about protecting systems and data.

Now, don't get me wrong: cybersecurity is still needed to identify all the systems within your enterprise, and beyond it across the extended enterprise, and to protect the data on all of those systems. But given the AI landscape, it needs to evolve into something more, which is, and I love how Lakshmi put it, going to be about trust. So going forward, how can cybersecurity evolve to start protecting decision-making and trust? Because trust is starting to become measurable: through provenance, through authenticity, and through verification. All of these mechanisms are going to come in so that we are able to identify, measure, rate, and risk-rank whether a particular transaction, be it a payment approval or an executive communication, is trustworthy or not.
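A minimal sketch of what such measurable, trust-gated transaction approval could look like. This is purely illustrative: the signal names, weights, and threshold below are invented for the example, and the speaker did not specify any concrete mechanism.

```python
from dataclasses import dataclass

@dataclass
class TrustSignals:
    provenance: float     # 0..1, can the message's origin be traced?
    authenticity: float   # 0..1, e.g. a signature or deepfake-detector score
    verification: float   # 0..1, strength of out-of-band confirmation

def trust_score(s: TrustSignals) -> float:
    """Weighted combination of the three signals the panel mentions.
    Weights are illustrative assumptions, not a standard."""
    return 0.3 * s.provenance + 0.4 * s.authenticity + 0.3 * s.verification

def allow_transaction(s: TrustSignals, threshold: float = 0.7) -> bool:
    # The gating agent lets the transaction (payment approval,
    # executive communication, etc.) through only if the measured
    # trust clears the policy threshold.
    return trust_score(s) >= threshold
```

In practice each signal would itself come from dedicated verification infrastructure; the point of the sketch is only that trust becomes a number a policy can act on.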

And then, accordingly, the agent or system gating the transaction allows it through or not. That's something we're seeing, and AI in this context is a force multiplier on both sides. For us as defenders, like Dharshan said, we can detect and identify threats at a scale and speed we have never seen before. Is it going to completely revamp how we run SOCs? A little, yes. It's not going to replace all the analysts, but for certain tasks we have already started seeing, with Microsoft's Security Copilot for example, how it can automate tasks.

Like different agents doing different tasks; we're already starting to see that. But in addition, AI is also helping attackers on the other side of the equation: it is industrializing disruption at scale. Think phishing. Think social engineering. All of this manipulation is now happening at an unprecedented scale, and it will continue for the next few years, because that is where we are headed with AI-aided phishing and manipulation, and that is how it will impact the industry as a whole. That is the tectonic shift happening across the board.

So, working with leaders and board members, we look at how to frame these risks, and we usually see three lenses. The first is compliance risk: am I complying with the EU AI Act, with the DPDP Act, or with other sectoral guidance? It's more of a check-the-box approach; it may help protect against regulatory exposure, but not against systemic risk, like what Ms. Daisy was describing. The second angle, which some companies have started to move towards, is operational risk, where boards are starting to ask: is the model I am using reliable?

Is it safe? Is it trustworthy? And what is the risk if that model, or the service provider behind it, goes down? That is the operational-risk angle we're seeing more of. The third angle, which I think very few companies take today, is the strategic-risk angle: if an AI-driven identity attack damages my organization's reputation with my customers, what is my exposure in financial terms? These are questions boards will need to start asking. We are already getting them from leaders: how do we measure those risks, and how do we quantify risk in financial terms?

We also need to convey that to the board, because at the end of the day that is what boards are accountable for to their stakeholders and shareholders.
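One established way to express such risk in financial terms, consistent with but not named in the discussion, is annualized loss expectancy: expected yearly loss equals event frequency times loss magnitude. The figures below are purely illustrative assumptions, not anything the panel quoted.

```python
def annualized_loss_expectancy(events_per_year: float, loss_per_event: float) -> float:
    """Classic ALE: expected yearly loss = frequency x magnitude."""
    return events_per_year * loss_per_event

# Hypothetical board-level framing of an AI-driven identity attack:
# assume 0.5 successful attacks per year and a $2M average impact
# (reputation, remediation, customer churn).
exposure = annualized_loss_expectancy(0.5, 2_000_000)
print(f"Estimated annual exposure: ${exposure:,.0f}")  # Estimated annual exposure: $1,000,000
```

Richer models (for example FAIR-style analyses) replace the point estimates with distributions, but the board-level question is the same: what is the expected financial exposure?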

Samrat Kishor

That's great, and those are some interesting lenses you've put on the whole conversation. Sir, I'd like to bring you in now from your vantage point. When we talk about India's DPI, we are implementing AI into systems that cater to healthcare, to telecom, across the citizen supply chain, if you will. So how do we make sure the AI deployments we're doing are secure, and what capabilities do we have to ensure they take care of the risks the fellow panelists highlighted?

G. Narendra Nath

The financial sector, for example, is mature. But take the health sector: it's not as mature as others. Yet if you look at the health sector's enthusiasm to adopt AI, you'll find it is similar to that of the other sectors. So that is a big challenge. We've been engaging with the health sector, with recent meetings as well, on how to improve that sector's cybersecurity posture. So we had the digital divide; we have a cybersecurity divide; and now we are going to have an AI divide across enterprises in different sectors.

So that is a challenge that needs to be addressed. That, I think, is the capacity-building part, along with coming up with frameworks that people can access and that help them understand what really needs to be done. And you talked of assessment: when an enterprise comes with an AI system, is it secure? Is it doing the work it's supposed to do? We don't have those assessment frameworks now. Testing and assessment is an important part, as is creating the infrastructure so that people can go and test and assess. The department of DRD has come up with an ETI framework, if you're aware of it.

Similarly, from our office we funded a project, about a year back now, starting in November 2024, to come up with an assessment framework for AI systems. One part of it is the security aspect, and the other is, of course, the functional aspect: if somebody claims an AI system does something, how do you actually assess that? So one part is capacity building, and the other is having the frameworks in place. One good thing about this country is that we have an institutional framework established over time, especially for cybersecurity, with CERT-In and NCIIPC,

and the sectoral regulators have also come up with sandboxing regulations, in the sense that if you want to try out something new, these regulations help you do that. In the financial sector you have the RBI sandbox; the telecom sector also has such a mechanism. So people can start using these sandboxes to prove technologies, applications, and use cases, which will help them understand how things really work before they deploy to production. That, I think, would help going forward.

Samrat Kishor

Awesome. Thank you, sir. It's enlightening and enriching for all of us to hear your perspective, especially on what the government is doing. I'd like to bring in Lakshmi sir from Tata for the next question. Sir, if we reconvene here five years from now, what are we going to be talking about? What did we do? What did we get right?

A. S. Lakshminarayanan

I think these discussions are very healthy, whether we view AI through a positive lens or a fear lens. I have two comments. One is on the question of assessment. At Tata Communications, when we asked ourselves where we want to be five years from now, I made the statement that the next five years will determine the health of the company for the next fifty, because the technologies are moving very fast. For an assessment framework, we developed one ourselves: we studied a lot of material, didn't find anything good, and so built a framework where on one axis we plot capability. That includes talent.

It includes the platform, which is why I said there is no point in doing individual use cases in an organization. How many use cases will you do? You need a platform approach, which is where we said an AI operating system is required, and that is maturing. So on one axis we plot capability: talent, platform, even culture. I don't know whether people have appreciated that AI is a very different paradigm. Even now, in the discussions I see, people talk about how AI can help automate things and do them faster. No, that is not what AI will do. While the previous technologies of cloud and internet helped companies scale transactions, AI is going to scale decisions. And when you are scaling decisions, you need to think in a different paradigm altogether; we are still talking in the old paradigm of which tasks can be automated. So on the capability axis, the culture dimension has to be thought through carefully, and talent too; I find some of the younger talent easier to train on AI than some of the older, unfortunately. So the whole talent-and-capability equation is one axis, and the other axis is outcomes: what outcomes do you really want to deliver with AI?

And there, outcomes could be efficiency, revenue enhancement, or trust and customer satisfaction. All those outcomes need to be plotted. I must admit we ourselves are somewhere in the lower quadrant, and I hope we as a company will move to the top quadrant. That needs to be defined and visualized; only then can you move towards it, and that is what we are driving the company towards, with all the platform development we are doing and the strengthening of our infrastructure for enterprises. We have shared some of these assessments with our customers as well. So that is one. I hope most people will see themselves moving towards the top quadrant in five years' time.
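A two-axis framework like the one described, capability on one axis and outcomes on the other with companies aiming for the top quadrant, could be sketched as follows. The scoring scheme, quadrant names, and cutoff are invented for illustration; the speaker gave no numbers.

```python
def quadrant(capability: float, outcomes: float, cutoff: float = 0.5) -> str:
    """Place an organization in one of four quadrants.
    capability: 0..1 composite of talent, platform, and culture.
    outcomes:   0..1 composite of efficiency, revenue, and trust/CSAT.
    """
    high_cap = capability >= cutoff
    high_out = outcomes >= cutoff
    if high_cap and high_out:
        return "top quadrant: AI-mature"
    if high_cap:
        return "capable but under-delivering"
    if high_out:
        return "delivering on thin foundations"
    return "lower quadrant: early stage"

print(quadrant(0.3, 0.2))  # lower quadrant: early stage
```

The value of such a plot is less the exact scores than the visualization: it makes the gap between where a company is and the top quadrant explicit enough to drive towards.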

The second thing I worry about, in the context of strategy, is this. In the previous technology wave, with internet and cloud, new business models came about. We had intermediaries, the booking.coms and others, who disintermediated many, many players, and fintechs who came and did things better than the larger banks. The larger entities woke up only when these players were already eating their lunch. That is what happened in the previous wave of technology.

In AI, I think similar disruption is waiting to happen. We don't know where, when, or what. But if a strategy does not think about what disruptions are coming, we will have missed the bus. So five years from now, I would expect a new class of AI-native companies to be out there disrupting existing business models. Those are the two things I would expect in five years.

Samrat Kishor

Fabulous. And sir, one last question to you. If you were to give me a call five years from now and say, “Samrat, this is how nation states have changed,” what would that be?

G. Narendra Nath

See, one is that adoption of AI, as I have said elsewhere too, is a competitive advantage, so you have to adopt it. You have no other choice, because other nations and other enterprises are going to adopt AI and look at how to do business better. So five years down the line, you will find that we have adopted AI, and this conference is very good for that. The other is protecting yourself from the adverse effects of AI, because it is a very powerful tool. It is just a thought process, but as was pointed out, in just one year so much development has happened that we do not know where this is really going to lead us. So the thing is for us to be on our toes: to look at how this technology will affect the way we do business and run our countries; to build capacity and capability; to identify the dependencies we take on when this technology is adopted; and to see how to mitigate the dangers of those dependencies. That, I think, is where the thought process will be, and that is the roadmap for the next five years for us.

Samrat Kishor

Thank you, thank you very much, sir, and thank you to all the panelists for taking the time out and agreeing to do this for the audience. I see the room is full, with a lot of people waiting on the sides as well. Thank you all for your attention. Please put your hands together for the esteemed panel we have here. We have to conclude this panel only for paucity of time; otherwise we could have gone on. Thank you very much.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
D
Daisy Chittilapilly
3 arguments · 165 words per minute · 1068 words · 386 seconds
Argument 1
AI presents both opportunities and challenges in cybersecurity, similar to previous technologies but at unprecedented scale and speed
EXPLANATION
AI offers the promise of managing cybersecurity at machine scale when human scale is no longer adequate, but simultaneously brings new risks including model jailbreaking, data leakage, poisoning, and inherent vulnerabilities in open source models. This dual nature is common to all technologies but is particularly pronounced with AI.
EVIDENCE
Examples include the need to protect models from being jailbroken, prevent confidential information leakage, avoid data poisoning, and detect vulnerabilities in open source models
MAJOR DISCUSSION POINT
The Dual Nature of AI in Cybersecurity
AGREED WITH
Dharshan Shanthamurthy, Pradeep Sekar
Argument 2
There’s an ambition versus reality gap – 90% of enterprises want to deploy AI agents, but only a fraction have adequate data strategy, compute capacity, or AI threat understanding
EXPLANATION
Based on Cisco’s AI readiness index survey of nearly 1,000 large enterprises in India, there’s a significant disconnect between AI deployment ambitions and actual preparedness. While most enterprises want to deploy agents, they lack fundamental infrastructure and capabilities.
EVIDENCE
Survey data showing 90% want to deploy agents, 40% want human-AI collaboration, but only two-thirds have data strategy, one-fourth have compute capacity, one-third can handle AI threats, and less than one-fifth have innovation engines for AI applications
MAJOR DISCUSSION POINT
Infrastructure Fragility and AI Implementation Challenges
AGREED WITH
A. S. Lakshminarayanan, G. Narendra Nath
Argument 3
Networks must integrate security as a distributed mesh rather than separate appliances, with secure networking becoming the primary focus
EXPLANATION
Traditional approaches of deploying separate security appliances in various network parts are becoming outdated. Instead, security must be infused into the network fabric as virtual distributed instances that can be deployed wherever security policies are needed.
EVIDENCE
The evolution from hardware-based security appliances to virtual instances that integrate into network infrastructure, with networking and security domains converging
MAJOR DISCUSSION POINT
Need for New Frameworks and Operating Systems
G
G. Narendra Nath
5 arguments · 186 words per minute · 1261 words · 405 seconds
Argument 1
AI adoption is happening at breakneck speed without adequate time to assess and mitigate adversarial effects, unlike previous technological revolutions
EXPLANATION
Unlike previous technologies that had time frames for gradual adoption and risk assessment, AI is being rapidly adopted by enterprises without sufficient consideration of adversarial uses. This creates a dangerous disconnect where defenders focus on productivity while attackers invest heavily in malicious applications.
EVIDENCE
Comparison with previous technological revolutions that allowed time for beneficial use and risk mitigation, contrasted with current rapid AI adoption and willingness to integrate AI tools
MAJOR DISCUSSION POINT
The Dual Nature of AI in Cybersecurity
AGREED WITH
A. S. Lakshminarayanan, Richard Marko
DISAGREED WITH
Dharshan Shanthamurthy, A. S. Lakshminarayanan
Argument 2
AI systems lack the traditional separation between control plane and data plane, making them vulnerable to data poisoning and model drift
EXPLANATION
Traditional systems had separate control and data planes allowing access control to the control plane, but in AI systems the data itself becomes the control mechanism. This makes AI systems vulnerable to poisoning through inputs and can cause unpredictable model behavior over time.
EVIDENCE
Technical explanation of how traditional systems separated control and data planes versus AI systems where data serves as control, leading to model poisoning and drift issues
MAJOR DISCUSSION POINT
Infrastructure Fragility and AI Implementation Challenges
Argument 3
There’s a cybersecurity divide across sectors, with varying levels of maturity in adopting AI while maintaining security posture
EXPLANATION
Different sectors have varying levels of cybersecurity maturity, with financial sector being mature while health sector is less mature, yet all sectors show similar enthusiasm for AI adoption. This creates challenges in maintaining security across critical infrastructure.
EVIDENCE
Comparison between financial sector maturity and health sector immaturity, recent meetings with health sector about improving cybersecurity posture
MAJOR DISCUSSION POINT
Organizational Strategy and Risk Management
AGREED WITH
Daisy Chittilapilly, A. S. Lakshminarayanan
Argument 4
Assessment frameworks for AI systems are needed to evaluate both security and functional aspects before deployment
EXPLANATION
Currently there are no adequate frameworks to assess whether AI systems are secure and perform their intended functions. This infrastructure is essential for enterprises to test and validate AI systems before production deployment.
EVIDENCE
Reference to ETI framework from Department of DRD and a funded project from November 2024 for developing AI system assessment frameworks covering both security and functional aspects
MAJOR DISCUSSION POINT
Need for New Frameworks and Operating Systems
AGREED WITH
A. S. Lakshminarayanan, Pradeep Sekar
Argument 5
AI adoption is a competitive advantage at national level, but countries must also protect against adverse effects and identify dependencies
EXPLANATION
Nations must adopt AI to remain competitive as other countries and enterprises will gain advantages through AI adoption. However, they must simultaneously develop capabilities to protect against AI’s adverse effects and understand the dependencies created by AI adoption.
EVIDENCE
Recognition that other nations and enterprises adopting AI will create competitive pressures, emphasis on being prepared for unknown future developments in AI technology
MAJOR DISCUSSION POINT
Future Outlook and National Implications
A
A. S. Lakshminarayanan
4 arguments · 162 words per minute · 1324 words · 488 seconds
Argument 1
Digital infrastructure in enterprises is already fragile, and adding AI will multiply this fragility by 100 times due to increased network traffic and API calls
EXPLANATION
Current enterprise digital infrastructure suffers from fragmentation across different OEM technologies and security vulnerabilities. AI implementation will dramatically increase network traffic, especially east-west traffic, and create numerous long-lived API sessions that will strain edge infrastructure beyond current capacity.
EVIDENCE
Examples of IT-OT security gaps in operational technology, islands of different OEM technologies, increased API calls from AI inferencing at edge devices, and long-lived sessions different from traditional API calls
MAJOR DISCUSSION POINT
Infrastructure Fragility and AI Implementation Challenges
AGREED WITH
Daisy Chittilapilly, G. Narendra Nath
DISAGREED WITH
Dharshan Shanthamurthy, G. Narendra Nath
Argument 2
Enterprises require an AI operating system with context, agentic, and trust/governance layers to control what agents can and cannot do
EXPLANATION
Rather than focusing on individual LLMs, enterprises need a comprehensive AI operating system that combines knowledge from LLMs with contextual information and governance controls. This system should include layers for context, agents, and trust/governance to ensure controlled and intelligent use of underlying models.
EVIDENCE
Explanation of how LLMs provide knowledge but need context layers for actionable intelligence, and trust/governance layers to configure agent permissions and restrictions
MAJOR DISCUSSION POINT
Need for New Frameworks and Operating Systems
AGREED WITH
G. Narendra Nath, Pradeep Sekar
Argument 3
AI represents a paradigm shift from scaling transactions to scaling decisions, requiring different cultural and talent approaches
EXPLANATION
Unlike previous technologies like cloud and internet that helped scale transactions, AI fundamentally scales decision-making processes. This requires organizations to think beyond task automation and adopt entirely new paradigms for culture, talent development, and business operations.
EVIDENCE
Distinction between previous technologies scaling transactions versus AI scaling decisions, observation that younger talent adapts more easily to AI than older employees, development of internal assessment frameworks plotting capability versus outcomes
MAJOR DISCUSSION POINT
Organizational Strategy and Risk Management
AGREED WITH
G. Narendra Nath, Richard Marko
Argument 4
The next five years will determine organizational health for the next 50 years, with AI-native companies potentially disrupting existing business models
EXPLANATION
Organizations face a critical period where AI adoption and strategy will determine long-term viability. Similar to how internet and cloud technologies enabled new intermediaries and fintechs to disrupt traditional businesses, AI-native companies are expected to emerge and challenge existing business models.
EVIDENCE
Historical examples of how internet and cloud enabled disruptors like booking.com and fintechs to challenge traditional industries, prediction of similar AI-driven disruption by new AI-native companies
MAJOR DISCUSSION POINT
Future Outlook and National Implications
D
Dharshan Shanthamurthy
1 argument · 174 words per minute · 590 words · 202 seconds
Argument 1
AI creates a level playing field between defenders and attackers, offering hope for better threat detection and automated security operations
EXPLANATION
Cybersecurity has traditionally been asymmetric with attackers having advantages, but AI now provides defenders with capabilities to identify threats at scale and automate security operations. This represents a significant opportunity for cybersecurity improvement and talent development.
EVIDENCE
Example of AI-powered security operations centers where agents can perform tasks like shift handovers, and the ability to find ‘needles in haystacks’ for threat detection
MAJOR DISCUSSION POINT
The Dual Nature of AI in Cybersecurity
AGREED WITH
Daisy Chittilapilly, Pradeep Sekar
DISAGREED WITH
G. Narendra Nath, A. S. Lakshminarayanan
R
Richard Marko
2 arguments · 137 words per minute · 463 words · 201 seconds
Argument 1
People remain the weakest link in cybersecurity, and AI agents performing tasks on behalf of users create new risks through unsupervised actions
EXPLANATION
Humans are vulnerable to social engineering and lack deep technical understanding of systems, making them susceptible to AI-enhanced attacks like deepfakes. Additionally, AI agents acting on behalf of users may take steps or acquire tools without proper supervision, creating new attack vectors.
EVIDENCE
Examples of deepfakes making scam communications indistinguishable from real ones, and scenarios where AI agents might acquire additional tools or software packages without user awareness during task execution
MAJOR DISCUSSION POINT
Infrastructure Fragility and AI Implementation Challenges
Argument 2
Resilience requires attention to details and understanding of background processes, sometimes necessitating slowing down implementation
EXPLANATION
Building resilience in AI systems requires careful examination of how commands are transferred to agents, potential interception points, and what processes run in background. While there’s excitement about AI adoption, sometimes organizations need to slow down to ensure proper security measures are in place.
EVIDENCE
Emphasis on understanding command transfer mechanisms, interception possibilities, and background processes that may not be visible to users
MAJOR DISCUSSION POINT
Organizational Strategy and Risk Management
AGREED WITH
G. Narendra Nath, A. S. Lakshminarayanan
DISAGREED WITH
G. Narendra Nath, A. S. Lakshminarayanan, Dharshan Shanthamurthy
P
Pradeep Sekar
3 arguments · 177 words per minute · 829 words · 280 seconds
Argument 1
AI industrializes disruption at scale, particularly in phishing and social engineering attacks
EXPLANATION
While AI helps defenders with threat detection and automation, it simultaneously enables attackers to conduct manipulation and social engineering attacks at unprecedented scale and sophistication. This industrialization of attacks represents a significant escalation in threat landscape.
EVIDENCE
Examples of AI-aided phishing and social engineering attacks, reference to Microsoft Security Copilot for automated security tasks, and the trend toward industrialized manipulation
MAJOR DISCUSSION POINT
The Dual Nature of AI in Cybersecurity
AGREED WITH
Daisy Chittilapilly, Dharshan Shanthamurthy
Argument 2
Cybersecurity must evolve from protecting systems and data to protecting decision-making and trust through measurable mechanisms
EXPLANATION
Traditional cybersecurity focused on system and data protection, but AI requires evolution toward protecting decision-making processes and establishing measurable trust through provenance, authenticity, and verification mechanisms. This enables systems to assess and control transaction trustworthiness.
EVIDENCE
Examples of measuring trust through provenance, authenticity, and verification for transactions like payment approvals and executive communications
MAJOR DISCUSSION POINT
Need for New Frameworks and Operating Systems
AGREED WITH
A. S. Lakshminarayanan, G. Narendra Nath
Argument 3
Organizations need to move beyond compliance-focused approaches to address operational and strategic AI risks
EXPLANATION
Companies typically approach AI risk through three lenses: compliance (regulatory requirements), operational (model reliability and service continuity), and strategic (reputation and financial impact). Most focus only on compliance, but strategic risk assessment is crucial for understanding financial exposure from AI-driven attacks.
EVIDENCE
Three risk lenses: compliance with the EU AI Act and the DPDP Act, operational risks of model reliability and service provider dependencies, and strategic risks including reputation damage and financial quantification of AI-driven identity attacks
MAJOR DISCUSSION POINT
Organizational Strategy and Risk Management
S
Samrat Kishor
1 argument · 175 words per minute · 1101 words · 375 seconds
Argument 1
Corporate AI responsibility should evolve from corporate social responsibility, with companies taking ownership of their AI actions
EXPLANATION
Just as companies developed corporate social responsibility frameworks, there’s now a need for corporate AI responsibility where organizations take accountability for controlling and owning the actions of AI systems they build and deploy.
MAJOR DISCUSSION POINT
Future Outlook and National Implications
Agreements
Agreement Points
Infrastructure fragility and inadequate preparation for AI implementation
Speakers: Daisy Chittilapilly, A. S. Lakshminarayanan, G. Narendra Nath
There’s an ambition versus reality gap – 90% of enterprises want to deploy AI agents, but only a fraction have adequate data strategy, compute capacity, or AI threat understanding
Digital infrastructure in enterprises is already fragile, and adding AI will multiply this fragility by 100 times due to increased network traffic and API calls
There’s a cybersecurity divide across sectors, with varying levels of maturity in adopting AI while maintaining security posture
All three speakers agree that current digital infrastructure is inadequately prepared for AI implementation, with significant gaps between AI adoption ambitions and actual readiness across enterprises and sectors
Need for comprehensive AI governance and operating systems
Speakers: A. S. Lakshminarayanan, G. Narendra Nath, Pradeep Sekar
Enterprises require an AI operating system with context, agentic, and trust/governance layers to control what agents can and cannot do
Assessment frameworks for AI systems are needed to evaluate both security and functional aspects before deployment
Cybersecurity must evolve from protecting systems and data to protecting decision-making and trust through measurable mechanisms
These speakers converge on the need for comprehensive frameworks and operating systems that go beyond traditional approaches to govern AI implementation with proper trust, assessment, and control mechanisms
AI as both opportunity and threat in cybersecurity
Speakers: Daisy Chittilapilly, Dharshan Shanthamurthy, Pradeep Sekar
AI presents both opportunities and challenges in cybersecurity, similar to previous technologies but at unprecedented scale and speed
AI creates a level playing field between defenders and attackers, offering hope for better threat detection and automated security operations
AI industrializes disruption at scale, particularly in phishing and social engineering attacks
All speakers acknowledge AI’s dual nature in cybersecurity – providing powerful defensive capabilities while simultaneously enabling more sophisticated attacks
Rapid pace of AI adoption creates unique challenges
Speakers: G. Narendra Nath, A. S. Lakshminarayanan, Richard Marko
AI adoption is happening at breakneck speed without adequate time to assess and mitigate adversarial effects, unlike previous technological revolutions
AI represents a paradigm shift from scaling transactions to scaling decisions, requiring different cultural and talent approaches
Resilience requires attention to details and understanding of background processes, sometimes necessitating slowing down implementation
These speakers agree that AI’s unprecedented speed of adoption creates unique challenges requiring careful consideration and sometimes deliberate slowing of implementation to ensure proper security measures
Similar Viewpoints
Both speakers from major technology companies (Cisco and Tata) emphasize the fundamental inadequacy of current infrastructure approaches and the need for integrated, distributed security solutions rather than traditional bolt-on security measures
Speakers: Daisy Chittilapilly, A. S. Lakshminarayanan
Networks must integrate security as a distributed mesh rather than separate appliances, with secure networking becoming the primary focus
Digital infrastructure in enterprises is already fragile, and adding AI will multiply this fragility by 100 times due to increased network traffic and API calls
Both speakers emphasize the strategic imperative of AI adoption while stressing the need for comprehensive risk management that goes beyond basic compliance to address operational and strategic concerns
Speakers: G. Narendra Nath, Pradeep Sekar
AI adoption is a competitive advantage at the national level, but countries must also protect against adverse effects and identify dependencies. Organizations need to move beyond compliance-focused approaches to address operational and strategic AI risks.
Both speakers focus on the human vulnerability aspect of AI-enhanced attacks, emphasizing how AI amplifies existing human weaknesses in cybersecurity through more sophisticated social engineering and unsupervised agent actions
Speakers: Richard Marko, Pradeep Sekar
People remain the weakest link in cybersecurity, and AI agents performing tasks on behalf of users create new risks through unsupervised actions. AI industrializes disruption at scale, particularly in phishing and social engineering attacks.
Unexpected Consensus
Need to slow down AI implementation despite competitive pressures
Speakers: Richard Marko, A. S. Lakshminarayanan, G. Narendra Nath
Resilience requires attention to detail and an understanding of background processes, sometimes necessitating slowing down implementation. The next five years will determine organizational health for the next 50 years, with AI-native companies potentially disrupting existing business models. AI adoption is happening at breakneck speed without adequate time to assess and mitigate adversarial effects, unlike previous technological revolutions.
Despite the competitive pressures and transformative potential of AI, multiple speakers from different backgrounds (cybersecurity expert, telecom executive, and government official) converge on the counterintuitive need to deliberately slow down implementation to ensure proper foundations and security measures
Corporate responsibility evolution for AI era
Speakers: Samrat Kishor, A. S. Lakshminarayanan
Corporate AI responsibility should evolve from corporate social responsibility, with companies taking ownership of their AI actions. Enterprises require an AI operating system with context, agentic, and trust/governance layers to control what agents can and cannot do.
The moderator’s suggestion for corporate AI responsibility finds unexpected alignment with the industry executive’s technical proposal for AI governance systems, suggesting convergence between ethical frameworks and practical implementation needs
Overall Assessment

The speakers demonstrate remarkable consensus on key challenges: infrastructure inadequacy, the dual nature of AI in cybersecurity, need for comprehensive governance frameworks, and the unprecedented speed of AI adoption requiring careful management. Despite coming from different sectors (government, private industry, cybersecurity), they align on both the transformative potential and significant risks of AI implementation.

High level of consensus with strong implications for coordinated action. The agreement across diverse stakeholders suggests these challenges are universally recognized and require collaborative solutions spanning public-private partnerships, new regulatory frameworks, and fundamental rethinking of digital infrastructure approaches. The consensus particularly around slowing implementation despite competitive pressures indicates mature understanding of the risks involved.

Differences
Different Viewpoints
Pace of AI implementation versus security preparedness
Speakers: G. Narendra Nath, A. S. Lakshminarayanan, Richard Marko, Dharshan Shanthamurthy
AI adoption is happening at breakneck speed without adequate time to assess and mitigate adversarial effects, unlike previous technological revolutions. Digital infrastructure in enterprises is already fragile, and adding AI will multiply this fragility by 100 times due to increased network traffic and API calls. Resilience requires attention to detail and an understanding of background processes, sometimes necessitating slowing down implementation. AI creates a level playing field between defenders and attackers, offering hope for better threat detection and automated security operations.
While Narendra Nath, Lakshminarayanan, and Marko emphasize the need to slow down AI implementation due to infrastructure fragility and security risks, Dharshan presents a more optimistic view focusing on AI’s opportunities for cybersecurity improvement
Focus on fear versus hope in AI cybersecurity discourse
Speakers: Dharshan Shanthamurthy, G. Narendra Nath, A. S. Lakshminarayanan
AI creates a level playing field between defenders and attackers, offering hope for better threat detection and automated security operations. AI adoption is happening at breakneck speed without adequate time to assess and mitigate adversarial effects, unlike previous technological revolutions. Digital infrastructure in enterprises is already fragile, and adding AI will multiply this fragility by 100 times due to increased network traffic and API calls.
Dharshan emphasizes the hopeful aspects and opportunities AI brings to cybersecurity, while Narendra Nath and Lakshminarayanan focus more on the risks and challenges that need immediate attention
Unexpected Differences
Role of human factors in AI security
Speakers: Richard Marko, Dharshan Shanthamurthy
People remain the weakest link in cybersecurity, and AI agents performing tasks on behalf of users create new risks through unsupervised actions. AI creates a level playing field between defenders and attackers, offering hope for better threat detection and automated security operations.
While both speakers acknowledge AI’s impact on cybersecurity, Marko emphasizes increased human vulnerability and risks from AI agents acting without supervision, whereas Dharshan focuses on AI’s potential to automate and improve security operations, representing fundamentally different views on human-AI interaction in security contexts
Overall Assessment

The main areas of disagreement center around the appropriate pace of AI implementation, the balance between optimism and caution in AI adoption, and the role of human factors in AI security. While all speakers acknowledge both opportunities and risks in AI cybersecurity, they differ significantly in their emphasis and proposed approaches.

Moderate disagreement level with significant implications for AI policy and implementation strategies. The disagreements reflect different risk tolerances and implementation philosophies that could lead to substantially different approaches to AI governance and cybersecurity frameworks at organizational and national levels.

Partial Agreements
All speakers agree that current enterprise infrastructure and frameworks are inadequate for AI deployment, but they propose different solutions – Daisy focuses on infrastructure readiness gaps, Lakshminarayanan proposes AI operating systems, and Narendra Nath emphasizes assessment frameworks
Speakers: Daisy Chittilapilly, A. S. Lakshminarayanan, G. Narendra Nath
There’s an ambition versus reality gap: 90% of enterprises want to deploy AI agents, but only a fraction have an adequate data strategy, compute capacity, or AI threat understanding. Enterprises require an AI operating system with context, agentic, and trust/governance layers to control what agents can and cannot do. Assessment frameworks for AI systems are needed to evaluate both security and functional aspects before deployment.
All speakers agree that AI requires fundamental changes in how organizations approach technology and security, but they focus on different aspects – Daisy on network architecture, Pradeep on trust mechanisms, and Lakshminarayanan on organizational paradigms
Speakers: Daisy Chittilapilly, Pradeep Sekar, A. S. Lakshminarayanan
Networks must integrate security as a distributed mesh rather than separate appliances, with secure networking becoming the primary focus. Cybersecurity must evolve from protecting systems and data to protecting decision-making and trust through measurable mechanisms. AI represents a paradigm shift from scaling transactions to scaling decisions, requiring different cultural and talent approaches.
Takeaways
Key takeaways
AI in cybersecurity presents a dual nature, offering opportunities for better threat detection and automated security operations while simultaneously creating new attack vectors and risks at unprecedented scale and speed. Current digital infrastructure is fragile and unprepared for AI implementation, with most enterprises lacking adequate data strategy, compute capacity, and AI threat understanding despite ambitious deployment plans. AI fundamentally changes cybersecurity from protecting systems and data to protecting decision-making and trust, requiring new frameworks that can measure and verify authenticity and provenance. Organizations need AI operating systems with integrated context, agentic, and governance layers rather than implementing isolated AI use cases. The paradigm shift involves scaling decisions rather than just transactions, requiring new cultural approaches, talent development, and risk management strategies. India is positioned advantageously at the intersection of AI and cybersecurity to develop world-class talent and leverage emerging opportunities. Assessment frameworks for AI systems are critically needed to evaluate both security and functional aspects before production deployment. Corporate AI responsibility should emerge as a new standard, with organizations taking ownership of their AI systems’ actions and impacts.
Resolutions and action items
Development of assessment frameworks for AI systems by government agencies (an ongoing project funded by the National Cyber Security Coordinator’s office, starting November 2024). Implementation of AI operating systems with trust and governance layers by enterprises such as Tata Communications. Utilization of regulatory sandboxes (RBI, telecom sector) for testing AI technologies before production deployment. Building secure networking infrastructure with a distributed security mesh rather than separate appliances. Capacity-building initiatives across sectors, particularly in less mature sectors such as healthcare. Development of institutional frameworks leveraging existing cybersecurity infrastructure (CERT-In, CIPC).
Unresolved issues
How to bridge the cybersecurity maturity divide across different sectors while maintaining uniform AI adoption enthusiasm. Specific mechanisms for measuring and quantifying AI-related strategic risks in financial terms for board-level decision making. How to balance the speed of AI adoption with necessary security precautions without losing competitive advantage. Standardization of AI assessment frameworks across industries and regions. Managing the talent gap between younger AI-native workers and experienced professionals in traditional paradigms. Identifying and preparing for potential business model disruptions from AI-native companies. Addressing the fundamental challenge of applying deterministic security requirements to probabilistic AI technologies. Developing effective governance mechanisms for AI agents performing unsupervised tasks on behalf of users.
Suggested compromises
Slowing down AI implementation when necessary to ensure proper security foundations are in place, despite competitive pressures. Using regulatory sandboxes as a middle ground to test AI technologies safely before full production deployment. Implementing AI operating systems that provide governance controls while still enabling AI innovation and productivity gains. Building a virtual distributed security mesh that balances the need for pervasive security with infrastructure flexibility. Adopting a platform approach to AI implementation rather than individual use cases to balance efficiency with governance. Focusing on capability building and outcome definition simultaneously rather than pursuing either in isolation.
Thought Provoking Comments
I don’t think people have woken up to the fact that they are fast running towards the cliff. Because I genuinely think that the digital infrastructure in enterprises today are already fragile… So that’s why I’m saying that in all our excitement of AI, I’m very passionate and excited about AI, but I genuinely feel that people are not looking at the foundations… So you can’t build a skyscraper with a foundation of a bungalow, which is what they’re trying to do.
This metaphor powerfully reframes the entire AI adoption conversation from one of opportunity to one of foundational risk. It challenges the prevailing excitement about AI by highlighting that enterprises are building advanced AI capabilities on inherently fragile digital infrastructure.
This comment fundamentally shifted the discussion’s tone and became a recurring theme. Multiple subsequent speakers referenced this ‘fragility’ concept, with Daisy citing their AI readiness index showing the ‘ambition versus reality gap’ and other panelists building on the infrastructure concerns. It moved the conversation from theoretical AI benefits to practical implementation challenges.
Speaker: A. S. Lakshminarayanan
Case of AI is that it’s really happening at a breakneck speed… the issue is that there are nation states or big enterprises which are adversarial enterprises which would be using AI as a tool for doing it and they have got a lot of motivation to put in effort and thought process into how do I use it more effectively. Then the persons who are actually using AI for their own benefit… So this is where there is a disconnect and this has to be really bridged.
This insight reveals a critical asymmetry in AI adoption – that malicious actors are more motivated and focused in their AI implementation than defensive users, creating a dangerous imbalance. It’s a sophisticated analysis of the strategic dynamics at play.
This comment introduced the concept of motivational asymmetry that influenced later discussions. Dharshan later built on this by discussing how AI could level the playing field for defenders, directly addressing this imbalance. It shifted the conversation from technical challenges to strategic and motivational ones.
Speaker: G. Narendra Nath
I think rather than focusing on whether this LLM is good or that LLM is good and so on, this AI operating system is what is required for people to build an application which will ensure that all of these are governed properly… enterprises require an AI operating system… you need the context layer, you need the agentic layer, and more importantly, you need to have a trust and governance layer.
This concept of an ‘AI operating system’ with distinct layers (context, agentic, trust/governance) provides a concrete architectural framework for managing AI complexity. It moves beyond abstract concerns to propose a systematic solution.
This framework became a reference point for other speakers. Dharshan later mentioned ‘AI security operating system’ as a parallel concept, and the layered approach influenced discussions about governance and control. It elevated the conversation from problem identification to solution architecture.
Speaker: A. S. Lakshminarayanan
Cybersecurity has so far been a very asymmetric equation. The intruders have always had an advantage over the defenders… But with AI, now all of a sudden we are at the level playing field from a technology standpoint to identify a needle in the haystack.
This reframes AI as potentially solving one of cybersecurity’s fundamental problems – the asymmetric advantage of attackers. It provides a hopeful counterpoint to the fear-based discussions while acknowledging the historical challenge.
This comment provided crucial balance to the discussion, shifting from predominantly risk-focused to opportunity-focused. It directly responded to earlier concerns about adversarial AI use and influenced Pradeep’s later discussion about AI as a ‘force multiplier’ for both sides.
Speaker: Dharshan Shanthamurthy
AI is quietly reshaping the risk equation within the enterprise… cybersecurity can no longer be just about protecting systems and the data… it needs to evolve into something more… how can it evolve to start protecting decision-making and trust? Because trust is starting to become measurable… through provenance, through authenticity, as well as verification.
This insight fundamentally redefines cybersecurity’s scope from protecting static assets to protecting dynamic processes like decision-making. The concept of ‘measurable trust’ introduces a new paradigm for security thinking.
This expanded definition of cybersecurity influenced the discussion’s scope, connecting to earlier points about trust and governance. It provided a bridge between technical security measures and business decision-making, elevating the strategic importance of the conversation.
Speaker: Pradeep Sekar
AI is going to scale decisions and when you’re scaling decisions you need to think of a different paradigm all together, and we are still talking in the old paradigm of what tasks can be automated… this is a new paradigm.
This distinction between ‘scaling transactions’ (previous technologies) versus ‘scaling decisions’ (AI) provides a fundamental conceptual framework for understanding AI’s unique impact. It challenges conventional thinking about AI as merely an automation tool.
This paradigm shift concept influenced the final portions of the discussion about future disruptions and strategic thinking. It provided a lens for understanding why traditional approaches to technology adoption may be insufficient for AI.
Speaker: A. S. Lakshminarayanan
Overall Assessment

These key comments transformed the discussion from a surface-level exploration of AI and cybersecurity to a deep, multi-layered analysis of systemic challenges and paradigm shifts. Lakshminarayanan’s infrastructure fragility metaphor set a sobering tone that grounded the entire conversation in practical reality, while his later insights about AI operating systems and decision-scaling provided constructive frameworks for moving forward. The interplay between pessimistic realism (infrastructure fragility, adversarial asymmetry) and optimistic pragmatism (leveling the playing field, measurable trust) created a balanced, nuanced discussion. The comments built upon each other, with speakers referencing and expanding on previous insights, creating a cohesive narrative that evolved from problem identification to solution frameworks to paradigm recognition. The discussion successfully bridged technical, strategic, and policy perspectives, largely due to these pivotal insights that elevated the conversation beyond typical AI hype or fear-mongering.

Follow-up Questions
How do we develop effective assessment frameworks for AI systems that cover both security and functional aspects?
There’s a critical need for standardized frameworks to test and assess AI systems before deployment, as current assessment infrastructure is lacking
Speaker: G. Narendra Nath
How can we bridge the cybersecurity maturity divide across different sectors adopting AI at similar rates?
Different sectors have varying levels of cybersecurity maturity, yet similar enthusiasm for AI adoption, creating uneven risk exposure
Speaker: G. Narendra Nath
What new business models and disruptions will emerge from AI-native companies in the next five years?
Understanding potential market disruptions is crucial for strategic planning, as AI may create new intermediaries similar to how internet/cloud technologies did
Speaker: A. S. Lakshminarayanan
How do we distinguish between cybersecurity issues and AI system malfunctioning or poor design?
The lack of clarity between security breaches and system design flaws creates challenges in proper incident response and remediation
Speaker: G. Narendra Nath
How can enterprises build the necessary infrastructure capacity to handle AI’s exponential demands on networks and systems?
Current digital infrastructure is fragile and may not support the increased network traffic, API calls, and computational demands of AI systems
Speaker: Daisy Chittilapilly and A. S. Lakshminarayanan
How do we make AI decision-making deterministic in critical applications while using inherently probabilistic technology?
Financial and citizen service applications require predictable outputs, but AI models are probabilistic by nature, creating a fundamental challenge
Speaker: Daisy Chittilapilly
What mechanisms are needed to measure and quantify trust in AI systems for enterprise decision-making?
Trust is becoming measurable through provenance and verification, but enterprises need concrete methods to assess and rate trustworthiness of AI-driven decisions
Speaker: Pradeep Sekar
How can we develop comprehensive capacity building programs to address the AI divide across different sectors?
Different sectors have varying levels of readiness for AI adoption, requiring targeted education and framework development
Speaker: G. Narendra Nath
What are the specific dependencies created by AI adoption and how can we mitigate the risks of those dependencies?
Understanding and managing dependencies is crucial for national security and business continuity as AI becomes more integrated into critical systems
Speaker: G. Narendra Nath

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

U.S. AI Standards: Shaping the Future of Trustworthy Artificial Intelligence


Session at a glance
Summary, keypoints, and speakers overview

Summary

The panel at the India AI Impact Summit brought together U.S. government officials and leaders from four frontier AI companies to discuss the development of open standards for AI agents, emphasizing the need for interoperable interfaces that enable global builders to create applications on top of these systems [5-11][15-22]. Sihao Huang highlighted that American firms are investing $700 billion in AI infrastructure this year and are competing fiercely to make models cheaper and more powerful for developers worldwide [13-14].


Michael Sellitto described Anthropic’s Model Context Protocol (MCP) as a universal open standard that lets AI models connect to existing enterprise and government data sources and tools through simple descriptions, thereby eliminating bespoke integrations and fostering competition [28-38]. He also introduced the SKILLZ protocol, which encodes reusable task instructions that can be transferred across vendors, further enhancing portability and interoperability [46-48].


Owen Lauder explained Google DeepMind’s agent-to-agent standard, which provides a digitized “clipboard” of an agent’s identity, capabilities, goals, data handling, and security requirements to enable seamless communication between agents [63-71]. He added that Google’s Universal Commerce Protocol (UCP) lets agents interact with websites and payment systems, opening new business possibilities [74-76]. Michael Brown illustrated how OpenAI’s commerce protocol could allow an agent to arrange a family vacation by booking flights and hotels autonomously, showcasing the practical benefits of shared standards [99-101].


Wifredo Fernandez noted that open standards accelerate innovation, create a “parallel Internet” for agents, and raise novel regulatory questions about agent-driven platforms [115-122]. Austin Marin outlined the Center for AI Standards and Innovation’s role within the Department of Commerce and NIST, announcing the Agent Standards Initiative, a request for information on agent security, and a draft on agent identity and authorization [132-154][146-152][155-165]. He also described upcoming sector-specific listening sessions to address challenges such as PII handling in education and healthcare, aiming to produce voluntary best-practice documents that build confidence in AI deployments [165-172].


Sihao drew parallels to early Internet standards like TCP/IP and HTTPS, arguing that security-driven standards are essential for widespread adoption of AI agents, just as SSL enabled e-commerce [198-207]. Michael Sellitto used an automobile analogy to show how standardized performance metrics and open standards give users confidence and allow switching between vendors or to open-source models [211-218]. Owen reinforced these lessons by referencing historical electrical standards that enabled global plug compatibility, urging the AI community to adopt technically robust, interoperable standards while avoiding fragmented solutions [239-250][231-250]. Participants agreed that open, consensus-based standards are crucial for a global AI ecosystem and that international collaboration through networks such as the International Network for Advanced AI Measurement, Evaluation, and Science is already underway [276-280][231-250].


The discussion concluded with a shared commitment to develop voluntary, interoperable, and secure AI agent standards that will foster innovation, democratize access, and support worldwide adoption of AI technologies [185-188][209-210].


Keypoints


Major discussion points


Rapid emergence of AI-agent protocols and their functional benefits – The panel highlighted a growing ecosystem of standards such as the Anthropic Model Context Protocol (MCP), Google DeepMind’s A2A agent-to-agent protocol, OpenAI’s commerce protocol, and XAI’s MacroHearts project, which are beginning to serve as industry de-facto standards [17-21]. MCP is described as a “universal open standard for connecting AI systems to the tools and data sources that people already use,” enabling agents to discover and use enterprise or government data without bespoke integration [28-38]. Google’s A2A protocol provides a “digitized clipboard” that shares an agent’s identity, capabilities, intent, data handling, and security requirements to facilitate direct agent-to-agent communication [64-71]. The Universal Commerce Protocol (UCP) and OpenAI’s commerce protocol aim to let agents transact with websites and payment systems, opening a new “agentic economy” [74-76][98-101].


U.S. government’s coordinating role through the Center for AI Standards and Innovation (CASI) and NIST – CASI, housed in the Department of Commerce and partnered with NIST, positions itself as the “front door” for industry to engage with the government, avoiding duplicated requests and fostering consensus-based voluntary standards [132-140][146-152]. Recent actions include issuing a Request for Information on AI-agent security [155-161], supporting NIST’s draft on agent identity and authorization [163-165], and planning sector-specific listening sessions (education, healthcare, finance) to surface real-world challenges [165-173].


Security, trust, and evaluation as prerequisites for widespread adoption – Panelists repeatedly linked trustworthy standards to the historic rollout of internet security (SSL/HTTPS) and automotive safety metrics, arguing that standardized security assessments will give users the confidence to know “when to trust, and when not to trust” AI agents [206-207][211-218][219-227]. Analogies to car safety ratings and fuel-economy metrics illustrate how third-party, consensus-driven benchmarks can enable informed purchasing decisions for AI-enabled services [211-218].


Open, interoperable standards to prevent lock-in and promote global collaboration – The discussion emphasized that open protocols allow builders in India, Kenya, or elsewhere to switch models or providers without re-engineering, mirroring how early internet protocols (TCP/IP, HTTPS) unlocked global innovation and economic growth [188-190][194-199][202-207][224-226]. This openness is presented as a strategic U.S. policy choice contrasting with “closed” national internet initiatives [191-197].


International engagement and future work – Beyond the U.S., the panel noted active participation in the International Network for Advanced AI Measurement, Evaluation, and Science (IN-AIMES) and upcoming sector-specific listening sessions to gather global input, especially from emerging markets like India [165-170][276-280][274-275].


Overall purpose / goal of the discussion


The panel was convened to showcase the nascent ecosystem of AI-agent standards, explain how these protocols unlock interoperability, security, and commerce, and to outline the U.S. government’s role (through OSTP, CASI, and NIST) in coordinating voluntary, consensus-based standards that will enable a globally accessible, lock-in-free AI economy.


Tone of the discussion


The conversation remained collaborative and forward-looking throughout, beginning with enthusiastic introductions and a celebratory tone about industry progress. As technical details emerged, the tone shifted to a more explanatory, “building-the-foundation” mode, using historical analogies (internet, automotive standards) to underscore seriousness. Interspersed moments of light humor (e.g., Michael Brown’s accent joke) kept the atmosphere informal yet constructive. Overall, the tone stayed optimistic, emphasizing partnership between industry and government and a shared commitment to open standards.


Speakers

Sihao Huang – Senior Policy Advisor for AI, Emerging Tech, White House [S1]


Austin Marin – Acting Director, U.S. Center for AI Standards and Innovation, Department of Commerce [S4]


Wifredo Fernandez – Director for Global Government Affairs, XAI [S5]


Owen Lauder – Senior Director and Head of Frontier Policy and Public Affairs, Google DeepMind [S7]


Michael Sellitto – Head of Global Affairs, Anthropic [S9]


Michael Brown – Head of Growth and Operations (International), OpenAI [S11]




Full session report
Comprehensive analysis and detailed insights

The panel at the India AI Impact Summit brought together senior U.S. officials and leaders from four frontier AI companies (Anthropic, Google DeepMind, OpenAI, and XAI) to examine how open standards can make AI agents interoperable and commercially viable. Sihao Huang opened by introducing himself as the White House senior policy adviser for AI and noting the presence of Austin Marin, director of the Department of Commerce’s Center for AI Standards and Innovation (CASI), together with the company representatives [3-6][7-12]. He reminded the audience that American firms are investing roughly $700 billion in AI infrastructure this year and are competing fiercely to deliver cheaper, more powerful models for developers worldwide [13-15]. He also cited the 802.11 Wi-Fi protocol as a concrete illustration of how government-backed standards enable global interoperability [190-193]. The session’s purpose, he explained, was to explore how standardized interfaces can enable a thriving “agentic economy” [16-22].


Panelists quickly identified a nascent ecosystem of agent-centric protocols that are already shaping the market. The most prominent is Anthropic’s Model Context Protocol (MCP), which many companies are adopting as a de-facto industry standard [20-22][28-38]. Google DeepMind presented its agent-to-agent (A2A) protocol, OpenAI described its own commerce protocol, and XAI referenced its secretive MacroHearts project [21][63-71][74-76][98-101]. Collectively, these efforts aim to replace bespoke, vendor-locked integrations with open, reusable specifications that developers in any country can leverage [23-24][46-48].


MCP is framed as a universal, open-source contract that lets an AI model discover and use existing data sources and tools simply by receiving a high-level description of the resource [28-36]. In practice, an agent can be told that “payroll data lives in the HR system” or that “revenue figures are stored in HEX”, and it will know how to retrieve the information just as a human employee would [34-36]. By eliminating the need to rewrite connectors for each new vendor, MCP creates a degree of interoperability that encourages competition, reduces lock-in, and enables data portability across vendors [37-38].
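
The contract described here can be sketched as plain JSON. The sketch below is illustrative only: the `name`/`description`/`inputSchema` fields follow the tool shape published in the MCP specification, but the payroll tool itself and its schema are invented for this example.

```python
import json

# A hypothetical MCP-style tool description. The model sees only this
# metadata and decides when and how to call the tool; the tool name and
# schema below are invented for illustration.
payroll_tool = {
    "name": "get_payroll_record",
    "description": "Look up an employee's payroll record in the HR system.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "employee_id": {
                "type": "string",
                "description": "Internal HR identifier",
            },
        },
        "required": ["employee_id"],
    },
}

# Because the contract is just JSON, the same description can be served
# to any MCP-compatible model or client -- no bespoke connector per vendor.
wire_format = json.dumps(payroll_tool)
```

Since the description travels as data rather than code, swapping the model behind the agent does not require rewriting the integration, which is the portability point made above.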


A complementary initiative is the SKILLZ protocol, which encodes task-specific instructions that can be taught to an agent once and then reused across different providers [46-48]. This mirrors the way a new employee is trained on organisational procedures; once the skill set is captured, any compatible agent can execute the task, and the skill set can be ported if a user switches from Anthropic to another vendor [46-48].
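A toy sketch of that portability, using an invented structure rather than the published SKILLZ format: the skill is captured once as data, and can be replayed by whatever agent backend is supplied:

```python
# Hypothetical sketch of a portable "skill": a named set of instructions
# taught once, then executed by any compatible agent. The Skill class
# and field names are illustrative assumptions, not the real protocol.

from dataclasses import dataclass, field

@dataclass
class Skill:
    name: str
    steps: list = field(default_factory=list)

    def run(self, execute):
        """Replay each captured step with the supplied agent backend."""
        return [execute(step) for step in self.steps]

expense_report = Skill(
    name="file-expense-report",
    steps=["collect receipts", "total amounts", "submit to finance"],
)

# Swapping the `execute` callable models switching vendors: the skill
# definition itself never changes, which is the portability described above.
log = expense_report.run(lambda step: f"done: {step}")
```

The design point is that the skill lives outside any one provider: switching agents means swapping the executor, not re-teaching the task.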


Google’s A2A protocol tackles the long-standing problem of agents communicating directly with one another. It defines a “digitised clipboard” that carries an agent’s identifier, capabilities, intent, data-access requirements and security constraints, thereby allowing two agents to exchange information without custom code or a shared code base [63-71]. Owen Lauder stressed that this metadata-rich exchange is fundamental to “greasing the wheels of the agentic economy” [72-73].
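Conceptually, the clipboard is just structured metadata. The sketch below uses illustrative field names (the real A2A specification differs) to show how a receiving agent can decide whether to cooperate from the card alone, with no shared code base:

```python
import json

# Illustrative "digitised clipboard" an agent might hand to a peer in an
# A2A-style exchange. Field names are assumptions for this sketch only.
agent_card = {
    "id": "travel-agent-001",
    "capabilities": ["flight-search", "hotel-booking"],
    "intent": "book a round trip to Goa",
    "data_access": ["calendar:read", "payments:authorize"],
    "security": {"auth": "oauth2", "user_confirmation_required": True},
}

def can_cooperate(card: dict, needed: str) -> bool:
    """The receiving agent inspects the advertised metadata — capability
    plus security constraints — instead of relying on custom integration
    code written for that specific peer."""
    return needed in card["capabilities"] and card["security"]["auth"] == "oauth2"

print(json.dumps(agent_card, indent=2))
```

Note how the security constraints travel with the card itself, which is the point Owen Lauder makes later about baking security fields into the agent-to-agent metadata.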


The commercial dimension is addressed by Google’s Universal Commerce Protocol (UCP) and OpenAI’s commerce protocol. Both enable agents to interact with web sites, payment gateways and other e-commerce services, opening the possibility for agents to autonomously book flights, reserve hotels or purchase goods on behalf of users [74-76][98-101]. Michael Brown illustrated this with a scenario in which an agent arranges a family vacation, highlighting how shared standards allow agents from different companies and jurisdictions to cooperate safely [99-101].
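The flow can be pictured as a cart-plus-confirmation handshake. The function below is a hypothetical stand-in (neither UCP nor OpenAI’s actual protocol): the agent assembles the purchase, then pauses for user confirmation before any payment action:

```python
# Hypothetical sketch of an agent-mediated purchase: the agent builds a
# cart on the user's behalf, but a confirmation callback gates the actual
# payment step. The API shown is invented for illustration.

def agent_checkout(cart, confirm):
    """Total the cart, then purchase only if the user-side `confirm`
    callback approves the amount."""
    total = sum(item["price"] for item in cart)
    if not confirm(total):
        return {"status": "cancelled", "total": total}
    return {"status": "purchased", "total": total}

cart = [
    {"item": "flight DEL-GOI", "price": 120.0},
    {"item": "hotel, 3 nights", "price": 240.0},
]

# The user's policy here: approve anything up to a 500 budget.
result = agent_checkout(cart, confirm=lambda total: total <= 500)
```

Keeping the confirmation step outside the agent mirrors the user-confirmation requirement the panel later identifies as a prerequisite for trusting agents with payments.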


Security and trust were repeatedly identified as prerequisites for mass adoption. Sihao Huang likened the need for AI-agent security standards to the historic introduction of SSL/HTTPS, which unlocked e-commerce by assuring users that their transactions were safe [206-207]. Michael Sellitto reinforced this analogy with an automobile-industry metaphor: just as independent crash-test ratings and fuel-economy figures give consumers confidence in a vehicle, third-party benchmarks for AI-agent performance and safety would enable informed purchasing decisions and geopolitical resilience [211-218]. Owen Lauder added that security fields, such as authentication, data-handling policies and user-confirmation requirements, should be baked into the agent-to-agent metadata itself [71-73][227-229].


Austin Marin outlined the U.S. government’s coordinating role, describing CASI as the “front door” for industry to engage with the federal apparatus, reducing duplicated agency requests and fostering consensus-based voluntary standards [132-145]. He noted that CASI already has formal research or pre-deployment evaluation agreements with the panel companies [132-138]. CASI operates within NIST, which has a century-long tradition of facilitating industry-driven standards rather than imposing regulation [146-152]. Recent actions include a Request for Information on AI-agent security (closing in March) [155-161], a draft NIST ITL document on agent identity and authorisation [163-165], and planned sector-specific listening sessions (education, healthcare, finance) to surface real-world challenges such as handling personally identifiable information [165-173].


Panelists repeatedly drew parallels with historic standard-setting successes. Sihao Huang argued that the open, decentralised protocols that underpinned the early Internet (TCP/IP, HTTPS) were deliberately supported by the U.S. government and generated global prosperity while keeping the system open to competition [188-199]. He warned against “closed” national versions of the Internet, noting that only the open suite achieved worldwide scale [194-199]. Owen Lauder recalled early scepticism about online credit-card use, underscoring how security standards transformed the digital economy [206-207].


International collaboration was presented as essential to avoid fragmented ecosystems. CASI participates in the International Network for Advanced AI Measurement, Evaluation and Science (IN-AIMES), a ten-country forum that shares best-practice measurement methods and aligns evaluation methodologies across borders [274-278]. Austin Marin also highlighted that CASI publishes a blog post summarising the consensus reached within IN-AIMES, ensuring that standards are globally relevant and that emerging markets such as India can both contribute to and benefit from the shared layer [279-280].


Across the discussion, there was strong consensus that open, interoperable standards are the cornerstone of a democratic AI future. All speakers agreed that such standards prevent vendor lock-in, enable builders in India, Kenya and elsewhere to switch models without re-engineering, and create a “parallel Internet” of AI services [186-190][121-123][138-145][146-152]. Points of tension emerged, however. Sihao Huang and Austin Marin advocated a prominent U.S. leadership role in exporting an open AI stack, whereas Michael Brown cautioned that industry should remain the primary driver of technical norm-setting, with government acting only as a convenor [191-199][138-145][252-254]. Security priorities also diverged: Sihao emphasized the historical SSL/HTTPS model, Owen focused on embedding security metadata in agent-to-agent exchanges, Michael Sellitto highlighted the need for third-party performance metrics, and Wifredo Fernandez called for broader privacy, auditability and consent mechanisms [206-207][71-73][219-227][264-268]. Fernandez raised the novel regulatory question of whether agent-driven social-media platforms should be regulated, linking it to his broader call for privacy, auditability and consent [119-122]. He also referenced the “Moltbook” phenomenon as an example of how AI-driven content spreads on X (formerly Twitter) [98-101].


Future of AI standards discussion (chronological order)


Sihao Huang opened the forward-looking segment by asking how the standards ecosystem should evolve to support an expanding agentic economy. Michael Sellitto responded with an automobile-industry analogy, arguing that independent safety ratings and fuel-efficiency metrics (analogous to third-party AI-agent benchmarks) will be essential for consumer confidence and geopolitical resilience [211-218]. Owen Lauder then reflected on historic lessons, noting that the introduction of SSL/HTTPS and early credit-card security standards were pivotal in unlocking digital commerce [206-207]. Michael Brown added that while government can facilitate coordination, the technical details of standards should be driven by industry expertise, with the state acting as a convenor rather than a regulator [191-199]. Finally, Wifredo Fernandez emphasized privacy-centric principles, calling for robust auditability, consent mechanisms, and explicit regulation of agent-driven social-media platforms [119-122][264-268].


In conclusion, the panel affirmed a shared vision: develop voluntary, consensus-based standards that are technically robust, security-focused and globally interoperable, thereby fostering innovation, competition and trust in AI-agent ecosystems. Immediate actions include submitting comments to the CASI RFI on agent security, reviewing NIST’s draft identity standards, and participating in the upcoming sector-specific listening sessions [155-165][165-173]. Longer-term goals involve harmonising measurement practices through IN-AIMES, expanding open protocols such as MCP, SKILLZ, A2A and commerce standards, and ensuring that these frameworks embed privacy, auditability and user consent to support responsible deployment worldwide [276-280][186-190][211-218].


Session transcript
Complete transcript of the session
Sihao Huang

of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable and open to the rest of the world to sort of build on that for your own businesses, for your own benefits. And so we have an amazing panel here today. We have, so first of all, I’m Sihao Huang. I’m Senior Policy Advisor for AI and Emerging Tech at the White House. We’re joined with Austin Marin, who’s the Director for the Center for AI Standards and Innovation at the Department of Commerce, which really is the center for a lot of AI activity within the U.S. government, setting standards, driving innovation, measuring AI systems, improving metrology, and a lot of the smartest people in the U.S.

government are within Austin’s organization. And then we have the four frontier AI companies from the United States. So we’re very happy to be joined by Mike Sellitto, who is the Head of Global Affairs at Anthropic. We have Owen Lauder at Google DeepMind, who’s the Senior Director and Head of Frontier Policy and Public Affairs. We have Mike Brown, who is Head of Growth and Operations for OpenAI for Countries. And, of course, we have Wifredo Fernandez, who is the Director for Global Government Affairs at XAI. So really the amazing lineup of U.S. industry. I said this in a previous panel, but American companies are spending $700 billion on infrastructure this year, just this year alone. And they probably won’t like it that I say this, but they’re competing very hard against each other to make AI models cheaper and more powerful for you guys to build on and to drive those applications.

And so this is going to be a panel on how we make that happen, how we standardize interfaces with those AI systems. And so first I’m just going to ask a question to the AI companies that are sat here. So over the past few months, I think, we’ve seen the emergence of an ecosystem of standards that have emerged to support the deployment of AI agents. I think one of the most notable ones is Anthropic’s Model Context Protocol, which a lot of other companies are building off of right now and is sort of becoming the industry standard. Of course, you have Google DeepMind’s A2A Agent-to-Agent Protocol, OpenAI’s Agentic Commerce Protocol, and then XAI, of course, has been working on its highly secretive and famous Macrohard agent project.

And so all the companies here are very much involved in sort of this agent discussion. And so maybe open it up to the companies here to tell us a little bit about what these agent protocols actually do and what they have unlocked for the builders who are sat here, the audience. What do they enable a software engineer or an AI engineer in India or other countries to create?

Michael Sellitto

Okay. Well, first I want to start off by thanking Sihao and OSTP for organizing this panel and all the people who are here. Thank you. So it’s great to be here with Austin. I think Anthropic has really had a really strong partnership with the Trump administration and appreciated the leadership of Secretary Lutnick in expanding and enhancing the Center for AI Standards and Innovation, which is really critical to making this technology work for everybody in a manner that’s safe, responsible, and open. MCP is a universal open standard for connecting AI systems to the tools and data sources that people already use. So imagine the knowledge bases inside of an enterprise. You can imagine government data sources.

The Indian government, of course, is a real leader in, why am I forgetting the acronym right now, DPI, sorry, and just has massive amounts of data that are already digitized. And so MCP is a way that you can connect your AI models and agents to those data sets and also tools. And it really is, you know, a simple, intuitive way. You just need to give the model a rough description of what’s in the data source and what kind of tools or how it can access it. And then the model will intuitively know how it can use those data sources the same way that somebody in your enterprise or your organization would know if I want to get payroll data, I need to go to this human resources system.

If I want to get data about, you know, our revenue, I need to go into HEX or whatever your particular tools are. You know, before MCP, you really had to build all these systems in a very bespoke manner, which meant that if you built them with one model or one vendor, you were kind of stuck because you’d have to rewrite everything if you wanted to switch. MCP being this open source protocol that’s supported by all of the major AI companies means that you really have this degree of interoperability, which just enables the whole system to be much more open and competitive.

We also recently built SKILLZ, which is a set of instructions that teach agents how to perform specific tasks. The way that I describe this or think about it is, you know, imagine a new person joins your team. You spend a little bit of time teaching them, you know, how to do work the way that your organization does it. And then you expect them to just be able to follow those instructions all the time. So you kind of teach once and then they’re able to do that. It’s the same thing with skills, which also is another open protocol where you can build these skills. And then if you decide that, you know, you want to switch from Anthropic to any of the other fine companies here on the panel, you can move those skills over.

And so that interoperability and data portability is really a critical piece of making this an open and competitive environment.

Owen Lauder

Amazing. Thank you, Mike. And, yeah, thank you to Sihao. Thank you to OSTP and the U.S. government for the event and all the partnership. And a big thank you and congrats to our Indian hosts on a fantastic summit week. If you take a step back, it has been, I think, a really exciting week, a demonstration of how advanced AI is now being used around the world to do incredible things. It’s been really exciting, I think, seeing the way that people are using Gemini right across India, really exciting to see the way that everyone in India, from world-class scientists using AlphaFold to teachers and students, is using AI in the classroom. And I think with all of the progress that we’ve seen in the last few years, it’s easy to forget sometimes that this is still relatively new technology.

We’re still in the relatively early innings of working out how to develop this technology and use it for good. And one of the things that we need to do, I think Sihao covered this very well in his opening gambit, is build out this ecosystem of technical standards to make sure that we can continue using this technology in the right ways. There’s a couple of ways that we’re thinking about these standards. One is technical standards, interoperable standards, and then also standards for testing these systems, making sure that we can use them in a reliable and secure way. We really want to contribute right across the piece here, so we’re excited. We have various standards that we have contributed to the ecosystem.

Our agent-to-agent standard that Sihao mentioned. This is basically a standard for how agentic systems talk to each other. At the moment, it’s a little bit tricky for agents to converse with each other. You have to often write bits of bespoke code for an agent to talk to an agent, or they have to be running on the same walled garden code base. So what we do with agent-to-agent is essentially have a sort of digitized clipboard of information that an agent will share with another agent. What’s my ID as an agent? What are my capabilities? What am I trying to do? How do I take data? What are my security requirements? This is going to be absolutely fundamental to sort of greasing the wheels of the agentic economy.

UCP, another standard that we’re working on, so we have our universal commerce protocol at Google. This essentially does the same thing, but it’s for how agents talk to websites and payment systems. This is going to be transformative for business. It’s great to be able to partner with companies right around the world, whether it’s Walmart and Target in the U.S. or Flipkart and Infosys in India that we’re working with across these agents. Excited to see what everyone is going to do with the technology that we can enable with this.

Michael Brown

Thanks for the tip. Hi, everyone. My name is Michael Brown. My name placard says George Osborne, who’s a colleague. He got tied up in another panel, so I’m here. George and I work extremely closely together, but he has a much nicer accent because he’s from the U.K. I’m doing my best here. You’re doing very well, I might say, very well. For me, this is a fun panel because it feels like a very collaborative and cooperative opportunity to grow the pie, and the companies that are on either of our side are extraordinary companies with extraordinary humans, and it’s fun to just work with them in some of these areas. If I were going to kind of explain why we’re here in this particular panel to my kids, who are 9 and 11, I would sort of say, look, are there countries out there in the world where when you get to a stoplight, red means go?

I don’t think so. I think mostly red means stop and green means go. I mean, if I’m wrong, I apologize. I’m not an expert. But, you know, having sort of shared understanding in countries, rich and poor, advanced and still developing, around how things work, I think grows the pie because it allows builders to build in a way that everyone can kind of know that what they’re building is going to be both secure and is going to be accessible and hopefully enjoyable or useful to people anywhere in the world. And I think each of the companies up here is contributing something great to that. You know, I joined OpenAI relatively recently, but like MCP to me is something like I just knew, it’s like, that’s really important.

And like, well, Anthropic introduced it. Hopefully, Anthropic would agree with this, that now it’s just like the thing, right? And I think that’s terrific that it’s the thing. You know, Owen also mentioned in commerce, I don’t know if these standards compete or if it’s cooperative, but at OpenAI, we have a commerce protocol as well for the same thing, because there’s a world where these agents are going to be out shopping for us, which is kind of fun, right? So, you know, if the agent knows that you’re planning on taking a family vacation and it knows that you want to visit Goa and the agent can go actually secure your travel flights and your hotel, these commerce protocols can do that.

So agents of different companies, potentially in different countries, can all partner and work well together because they understand how they’re supposed to be looking for shared information and how that information should be shared. There’s kind of a shared understanding there. And so I think all of us are working to build these protocols to grow the pie, to create more democratization, more commerce, more benefit for everyone by having these common protocols in place.

Wifredo Fernandez

Thank you, Sihao. Great to be with you all here, and thank you to the government for having us. What an exciting week, frenetic and kinetic and chaotic, as I was saying earlier. So it’s just an honor to be here and to feel the energy and all the innovation and to meet a bunch of different builders across India. So Wifredo Fernandez, folks call me Weefy for short. It’s a nickname I got in the 90s before wireless Internet was a thing, so my name became relevant later. But, yeah, this is certainly a topic that brings us all together, which is wonderful. You know, XAI is only two and a half years old. So we’re all in this together.

The foundational work done by these peer companies has enabled us to accelerate our development. We’re better because of those, and we’re better because we can all build on top of those. And these standards and protocols that folks have built and that we sort of lay out and sort of agree to as an industry and as governments really make sure that not just us four compete, right? This enables a ton of innovation. So, you know, on the X side, and, you know, XAI and X sort of operate in tandem, it’s been really neat to see the AI community sort of build and test and discuss and debate in public.

So, like, when Moltbook was taking off, I think you likely found out about it on X. And so it’s just neat to see the ecosystem sort of converge in that discussion space. And just in thinking about this panel and thinking about Moltbook in particular, it’s like, well, do we regulate social media platforms that are agent driven? It just brings all these really novel questions about how we regulate. But I think at the end of the day, we all agree that these open standards that are creating sort of this, call it a layer, call it a new ecosystem, call it a parallel Internet, are just really crucial for our development of the Internet writ large.

And so, yeah, excited about the panel and the discussion here today.

Sihao Huang

Thank you so much. Your name is formalized in the 802.11 protocol, which is what allows my phone to connect to the Internet in D.C. and here in India. So it’s extremely relevant. I’m going to use that. That’s awesome. So I think we’ve heard a little bit from our companies, who are engaging in a lot of dynamic activity, pushing out agent protocols of all kinds. And I think there’s a lot of industry excitement over agents right now. One of the big announcements that we’re here to make, which Director Kratsios also made early on the main stage, is the Agent Standards Initiative, and that is something that is led out of CASI at NIST. So I’ll turn to Austin to introduce that.

Austin Marin

Absolutely, and thanks, Sihao, and thank you to OSTP for convening this event and to my fellow panelists. I’ll start with a brief introduction of my organization. So I am the Acting Director for the U.S. Center for AI Standards and Innovation. Our background: we were founded about two years ago as the U.S. AI Safety Institute. In June of last year, Commerce Secretary Howard Lutnick refounded us as the Center for AI Standards and Innovation, which signaled a shift from sort of safety concepts to standards and innovation. And our remit is to be the front door to industry for working with the U.S. government. There’s, I think, two aspects of our organization that bear note: first, that we’re located within the Department of Commerce.

We are commerce-focused. We are industry-focused. We work with all of the companies on this panel. Some of them we have formal research or pre-deployment evaluation agreements with so that we can work with them on their models and the research questions they’re tackling. We also do take seriously our role trying to serve as a front door to the U.S. government for industry. We want to make sure that when industry is trying to navigate government that they’re speaking to the right people, that the people in government they’re speaking to have advisors who understand frontier AI and agentic AI, and also that the industry isn’t being overwhelmed by duplicative requests from different aspects of government.

You don’t want 10 different agencies asking the same company basically the same thing and creating unnecessary work, and so we try to act in sort of a coordinating role to make sure that industry is being heard as they’re navigating the U.S. government. The other aspect of our organization that bears note is we’re located within NIST, the National Institute of Standards and Technology, and NIST has an over-century-long track record of not regulating but helping industry, through consensus, develop voluntary standards and best practices. The Acting Director of NIST, Craig Burkhart, likes to talk about taillights, brake lights on the back of a car. I’m sure you all see them in India. It’s the same color red as it is in the U.S.

That’s because it was a NIST standard of what exactly color red is going to be on the taillights. But another important aspect of that anecdote is it wasn’t government that said this is the color red that you all must use. It was industry that came together, and with the help of NIST experts through a convening, they agreed on what the color should be. And so now when we look at what the future brings and where NIST can bring its industry-driven, consensus-based voluntary standards work into the new AI world, we’re looking to AI agent standards. So as Sihao said, we announced this week an AI agent standards initiative, which is looking at all facets of AI and AI agents.

There’s a couple aspects of it that have already been announced that we’re working on, and I’ll tick through those relatively quickly. The first is we have a request for information in the field. It closes in March and we encourage you to engage with us and provide comments on AI agent security. AI agents obviously bring a whole host of new security challenges and we’d love to hear from you and your organizations about what challenges you are facing. Learning and identifying those challenges is a first step. Once we identify those challenges we can then take the next step of seeing where NIST’s approach of voluntary standards and best practices documents can help address and mitigate those challenges.

Another aspect: our colleagues at NIST, the Information Technology Laboratory or ITL, have a draft out for comment on AI agent identity and authorization. Again, I encourage you to engage and interact with them. A third initiative that we recently announced is we’re going to hold sector-specific listening sessions, hopefully in April, in the sectors of education, healthcare, and finance, where we’re going to convene various members of industry and say to them, look, there’s this great technology out there called AI, have you heard of it, AI agents, why aren’t you adopting it? What challenges are you facing? And we may not be able to solve those challenges, but maybe we can. And so one example I give, and I don’t know that it’s going to be something we find out, but for instance, in the education and healthcare sector, there’s business concerns and existing regulatory concerns about PII, personally identifiable information.

And perhaps what we’ll learn through these listening sessions is that hospitals or schools aren’t deploying AI because they can’t reliably evaluate how AI agents are handling the PII. And so that’s something where CASI, my organization, could develop metrology, benchmarks, evaluations, best practices documents that could give confidence to those types of institutions that the agents are performing as desired. And maybe that’s a step that we could take through voluntary, consensus-driven best practices and standards that unlocks adoption. So we’re very focused on that. We’re looking forward to learning what those challenges are. I don’t know if the challenge I mentioned is actually a challenge facing industry. And that’s part of NIST’s approach, which is, in D.C., we only see a small slice of what’s going on in industry.

We only have a tiny window into the world. And so it comes from a place of humility. We don’t know the challenges people are facing. The companies that are on this panel are doing an incredible job coming up with protocols for some of the challenges that they’re facing. We talked about agent-to-agent for how agents communicate. We talked about MCP for how agents navigate databases. We talked about UCP and OpenAI’s commerce protocol for engaging in e-commerce. And I’m sure through these conversations, we’re going to identify other areas where open source protocols, where standards, best practices could help unlock adoption and implementation. And we’re really excited to work with both you and all your institutions and companies on stage to identify those opportunities and see how we can leverage NIST’s convening authority to help.

Sihao Huang

Thank you so much for that, Austin. I think to reemphasize, this standards initiative is really wanting to make sure that the products that we build on top of it are able to connect with each other, such that if there’s a builder in India, a builder in Kenya, building on top of our AI products, American companies can use them as well. American companies can buy from them as well. And similarly, if you want to switch to a different model, nothing is sort of locked in. And I think this really ties back to a perspective that we sort of, as the U.S. government, in particular the Trump administration, have about AI and AI products. We think back a lot on the history of the Internet and what that enabled for the world, but also what that enabled for America.

I think there was a perspective in the U.S. from a previous administration that technology had to be strictly locked down, and we think that’s a mistake. We want to share the best AI technologies with the rest of the world, and that’s also a sort of leading message that our delegation has here at the India AI Summit. And when we think back at the success of the Internet, what enabled that? There’s actually a number of companies and countries that tried to create their own closed versions of the Internet that were centralized, that were tied to particular nations and their own telecom networks, and they saw a little bit of success. A lot of them were state-subsidized, but none of them really scaled to the global level of the World Wide Web.

And the World Wide Web became so successful precisely because of the protocols that the U.S. government had supported. The U.S. government had made a very intentional effort to make sure that the Internet was a decentralized system, created protocols like TCP/IP, HTTPS, the sort of Internet suite that was actually funded by the U.S. government back then to create independent development of these protocols that enabled the rest of the world to build on that. And what you had is really this win-win situation where the entire world now benefits from sort of the access of the Internet, the ability to build applications, companies on top of that, that’s driven so much prosperity for countries around the world, but also made Silicon Valley one of the most wealthy places in human history.

And it is because of this open commerce. And that’s what we really want to create with a world of AI in the future as well. I think just to add a bit on to what Austin had said, sort of the agent security piece. Why is agent security so important to us? It’s precisely because of adoption. You need security-driven adoption. If you look back again also at the history of the internet, the development of the secure sockets layer, SSL, and then eventually HTTPS, was what enabled e-commerce. And so, again, I think it’s a lot about the efforts that we’re going to be working on with industry together to make sure that there is this standards ecosystem, that there are these interoperable interfaces that everyone can build on and trust to create the AI economy that we’re all looking forward to.

So I’ll stop ranting, but I’ll turn to the companies here. And I guess I’ll ask you all, how do you see sort of the future of AI standards and agent development? And how can AI agent standards really reflect the same principles that enable the open internet, including interoperability and including security?

Michael Sellitto

I feel like I need to somehow fit an automobile analogy in here since there’s been a theme. Maybe I’ll use my favorite one, which is right now if you go to buy a car and you go down to the car dealership, those cars are going to have a bunch of metrics that you can use that have been independently determined to understand the characteristics of that vehicle. So it will tell you what the fuel economy is, how far can you drive on a gallon or liter of gas, how does it perform in various types of crash tests. These are all metrics that are done in a standardized way that are oftentimes done by third parties, and so you can have sort of trust and confidence in them, and you can know what kind of car you want to buy.

Maybe I’m a single person and I like to drive fast, so I’m mostly worried about head-on collisions, because I’m going to be driving as fast as the car can possibly go and that’s the biggest danger for me. Or maybe I have a family and I’m worried about what happens if we get hit from the side with kids in the back seats. A piece that this standardization can help us get to is having that same kind of confidence in knowing what you’re purchasing, confidence that customers, governments, and the public can have. I think another real benefit, and it’s aligned with some things that Michael Kratsios, the OSTP director, talked about today and in an op-ed he had in the Financial Times, is around exporting the American AI stack.

There are a lot of concerns today about sovereignty, about having control over your systems and your data and so on. A way that you can both use the best technology in the world, which sometimes comes from American companies, and also have confidence that there’s resilience in the system, is really having things built to open standards. And that gives you the ability to decide to make changes. If today Anthropic is producing the best technology and tomorrow it’s XAI or OpenAI or someone else, you can change. Or maybe an open-source model gets good enough at the use case you want, and you want to switch over from a proprietary model to an open-source model.

So I think that’s what this can enable, and that’s the opportunity we have ahead of us. And the vision of the AI security standards work that CAISI is going to be doing is this: if you’re going to entrust these systems with access to your personal data or your financial data, or the ability to act in the real world on behalf of your enterprise, you need some sense that there’s security, that there’s authentication, and that there’s an ability to come back and check with the user before making certain significant decisions or taking certain significant actions. And you can test and evaluate and report that information in a way that is intelligible to the customer, so they know what they’re buying, when to trust, and when not to trust.

Owen Lauder

Yeah, well said, and I endorse a lot of what Mike mentioned there, and Austin and Sihao as well. I do think there’s a lot we can learn from the history of standards in various industries and apply to AI. Sihao mentioned some of the early Internet standards. I’m just about old enough to remember people in the early 90s saying they would never, ever put credit card information on the Internet, that it would be absolutely insane. And it sort of was, when information was being shared in plain text in a totally unencrypted way. Then you got the secure layer that Sihao mentioned, HTTPS, and it completely unlocked the modern Internet economy as we know it.

The history of electrical standards as well: this was something that drove the adoption of electrical products in the late 19th and early 20th century. You had a scientific approach to standardizing units of measurement like ohms, volts, and amperes, which allowed power suppliers to connect their energy to the grid. It also meant you could invent things like fuses, which could be rated for a certain amperage and would shut off if the current exceeded it. So I think we need to continue learning from history, and there are a few principles we should take forward as we do. Open standards, as we’ve been discussing, is the right way to go.

You need technically robust standards that are genuinely informed by an understanding of the technology and how it works, and we should prioritize interoperability as well. Maybe a final thought for this piece is also learning from standards that were not done well. There are many industries that have not quite gotten this right. A lot of us have traveled here from around the world having to bring adapters with us because our electrical products won’t plug into the wall. It’s really annoying, and it’s also a massive hindrance to commerce, because if you’re producing a computer or another electronic appliance, you have to provide a different plug for every country you’re developing your product for.

So there are things to avoid as well that we need to be mindful of.

Michael Brown

automobile industry or something, two humongous but separate industries, and how they’re going to have to come together to set up norms for how agentic systems work and how data is shared, I think government can probably play an important role in bringing together industries to establish those dialogues. But the industries certainly still need to be front and center in establishing what works for them because they are the practitioners and the experts on what their customers need, what their colleagues need. And so I think we’re all going to have to kind of navigate that world together and figure out what is the role for the research labs, how does government support, and then how does industry play a leadership role in both governing and building for itself industry -specific standards for the future of AI.

Wifredo Fernandez

Yeah, I think this conversation has been a bit of a history lesson. I appreciate that, thank you. And it made me think about how I used to get music when I was a kid, which some of the panelists may appreciate. There were these music catalogs that would come to your house. You’d select however many compact discs you wanted, put cash or a check in an envelope, and send it away. Some weeks later, magically, some CDs would appear on your doorstep. So when I think about instructing an agent to go download or acquire music on my behalf, I’d much rather have that. I don’t know how we used to put so much trust in a system without standards, or in a process that could not be audited.

So I think the guiding principles that developed the Internet still apply. We want privacy-preserving technology. We want technology that allows us to audit, that considers authenticity, and that considers means of consent. And to Michael’s point, I think ultimately agents serve the user and agents serve organizations. If we view it through that lens, it should guide us right. They don’t serve us as the model developers.

Sihao Huang

Great. Thank you all so much for that. That was a bit of a nerdy discussion on standards, a bit of a history lesson; I love that. But we’re also here at the India AI Impact Summit, talking to a country of builders and to the developing world, which includes some of the most dynamic AI markets anywhere. So it would be amazing to hear from the panelists here, including Austin: how are you all engaging with the rest of the world on these standards, how are your organizations engaging with other countries on AI, and what are some of the most exciting applications you’ve seen developed on top of your standards and products?

Austin Marin

I guess I’ll lead off. One of the main forums through which CAISI engages internationally is the International Network for Advanced AI Measurement, Evaluation, and Science. It’s a bit of a mouthful of a name, but it’s ten countries that have established AI security institutes or, as we have, a Center for AI Standards and Innovation, and we meet a couple of times a year. We also engage in informal technical and scientific exchanges and share best practices in measurement and evaluation science. In December, we met in San Diego on the sidelines of the NeurIPS conference, sat down to discuss open questions in measurement science and the challenges we’re facing, and published a blog post, I think about a week ago, that summarizes some of the areas of consensus and the open questions.

And the work we’re doing there, I think, is very important, because when we talk about the evaluation of AI systems, their particular capabilities, particular security vulnerabilities, and so on, it’s important for us to have consensus on the methodologies.

Related Resources: Knowledge base sources related to the discussion topics (12)
Factual Notes: Claims verified against the Diplo knowledge base (7)
Confirmed (high)

“The panel at the India AI Impact Summit brought together senior U.S. officials and leaders from four frontier AI companies—Anthropic, Google DeepMind, OpenAI and XAI.”

The transcript excerpt S3 lists four frontier AI companies from the United States (including Anthropic and Google DeepMind) participating in the summit, confirming the presence of senior U.S. officials and the four companies mentioned.

Confirmed (high)

“Austin Marin is the director of the Department of Commerce’s Center for AI Standards and Innovation (CASI).”

S21 identifies Austin Mayron (likely the same individual) as the Acting Director of the U.S. Center for AI Standards and Innovation, confirming his leadership role at the agency.

Correction (high)

“American firms are investing roughly $700 billion in AI infrastructure this year.”

The knowledge base entry S31 cites $500 billion invested in AI and frontier technologies, indicating that the $700 billion figure in the report is not supported and appears overstated.

Confirmed (medium)

“Anthropic’s Model Context Protocol (MCP) is a de‑facto industry standard for agent‑centric protocols.”

S1 highlights MCP as a universal standard for connecting AI systems to tools and data, and S68 notes that market adoption can create de‑facto industry standards, supporting the claim.

Confirmed (medium)

“Speakers emphasized that open, government‑backed standards (like the 802.11 Wi‑Fi protocol) enable global interoperability.”

S2 summarizes that all speakers strongly advocated for open, interoperable standards that enable cross‑vendor compatibility, echoing the report’s point about standards such as Wi‑Fi.

Confirmed (medium)

“Google DeepMind presented its agent‑to‑agent (A2A) protocol.”

S1 records that Google DeepMind discussed its A2A protocol during the summit, confirming the presentation.

Additional Context (low)

“Google’s A2A protocol defines a “digitised clipboard” that carries an agent’s identifier, capabilities, intent, data‑access requirements and security constraints.”

While S1 confirms the existence of an A2A protocol, it does not provide the detailed “digitised clipboard” description; the report adds this specific technical detail.

External Sources (70)
S1
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — <strong>Sihao Huang:</strong> of these agents work with each other smoothly. And protocols are so important because that…
S2
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — Great. Thank you all so much for that. So that was a bit of a nerdy discussion on standards, a bit of a history lesson. …
S3
https://app.faicon.ai/ai-impact-summit-2026/us-ai-standards_-shaping-the-future-of-trustworthy-artificial-intelligence — And it is because of this open commerce. And that’s what we really want to create with a world of AI in the future as we…
S5
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — Wifredo Fernandez, Director for Global Government Affairs at XAI
S6
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — Thank you, Sihal. Great to be with you all here, and thank you to the government for having us. What an exciting week, f…
S7
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — <strong>Sihao Huang:</strong> of these agents work with each other smoothly. And protocols are so important because that…
S8
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with …
S9
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — <strong>Sihao Huang:</strong> of these agents work with each other smoothly. And protocols are so important because that…
S10
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — <strong>Sihao Huang:</strong> of these agents work with each other smoothly. And protocols are so important because that…
S11
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — Thanks for the tip. Hi, everyone. My name is Michael Brown. My name placard says George Osborne, who’s a colleague. He g…
S12
https://app.faicon.ai/ai-impact-summit-2026/us-ai-standards_-shaping-the-future-of-trustworthy-artificial-intelligence — And like, well, Anthropic introduced it. Hopefully, Anthropic would agree with this, that now it’s just like the thing, …
S13
WS #75 An Open and Democratic Internet in the Digitization Era — Open standards promote interoperability and prevent lock-in to proprietary systems
S14
WS #283 AI Agents: Ensuring Responsible Deployment — Carter describes specific technical developments including Google’s agent-to-agent protocol for vendor-agnostic interact…
S15
Building Population-Scale Digital Public Infrastructure for AI — Very interesting. And I’ll just try to kind of paint the picture by giving a context. Now, think about it. We’re talking…
S17
Challenging the status quo of AI security — Connection between observed security challenges and need for standards Given the new security challenges that emerge wh…
S18
Interdisciplinary approaches — With regard to standardisation, almost continuous efforts are made to replace public standards with private and propriet…
S19
Launch / Award Event #96 Empower the Global Internet Standards Testing Community — Alena Muravska: colleagues here in the room but also colleagues online and I’m very grateful for this opportunity to be …
S20
AI for Social Good Using Technology to Create Real-World Impact — This discussion at the India AI Impact Summit focused on how open networks and digital public infrastructure (DPI) can e…
S21
Agentic AI in Focus Opportunities Risks and Governance — Yeah, absolutely. So at CAISI , our focus right now is truly on unlocking innovation and adoption. And we work in the st…
S22
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Larter emphasised that the emerging agentic economy requires new technical protocols for agents to communicate with each…
S23
How Trust and Safety Drive Innovation and Sustainable Growth — Summary:All speakers agreed that trust is the foundational requirement for AI adoption. Without trust, people simply won…
S24
The Global Power Shift India’s Rise in AI & Semiconductors — “I think what we need to do is we need to go for a, you know, a strategic decision -making in the sense that what is it …
S25
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — And finally, there’s a global dimension where International Solar Alliance is involved. What are going to be the interop…
S26
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Michael Karanicolas encouraged interactive participation from the audience, inviting comments and engagement from all pr…
S27
Artificial intelligence (AI) – UN Security Council — Another session highlighted the need for transparency and accountability in AI algorithms. The speakers advocated for AI…
S29
Agentic AI in Focus Opportunities Risks and Governance — “These standards -setting organizations are now very, very deep into sort of developing these same standards on agentic….
S30
Challenging the status quo of AI security — This is necessary for establishing audit trails and accountability in agent systems
S31
Comprehensive Report: President Trump’s Address to the World Economic Forum in Davos — It’s a beautiful thing to see. The leadership of the country has been very good. They’ve been very, very smart. Number …
S32
Trump administration poised to boost crypto influence in US policy — The incomingTrump administrationis set to shape the future of cryptocurrency andblockchaintechnology in the United State…
S33
https://app.faicon.ai/ai-impact-summit-2026/how-trust-and-safety-drive-innovation-and-sustainable-growth — and then we’re going to dive right into my immediate left. I have Alex Reed -Gibbons, who is the CEO of the Center for D…
S34
AI analysis of an interview Musk-Trump — Further, Musk lends support to Trump as a political leader, endorsing his policies as ‘the right path’ for ensuring Amer…
S35
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — The regulatory approach for social media platforms that are agent-driven needs to be determined
S36
 WSIS Action Line C9: Milestones, Challenges and Emerging Trends in Freedom of Expression and Media Development — Educational initiatives for diverse age groups, such as the young and elderly, are vital in preventing the spread of mis…
S37
Young Brains and Screens — Regulation is seen as a necessary measure for social media platforms. Concerns about the rapid erosion of shared humanit…
S38
Panel Discussion: Europe’s AI Governance Strategy in the Face of Global Competition — Le Fevre Cervini advocates for regulation to prevent deepfakes and hold social media platforms accountable for spreading…
S39
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — In conclusion, the analysis highlights the importance of collaboration and inclusivity in the development of AI standard…
S40
From Technical Safety to Societal Impact Rethinking AI Governanc — Explanation:Both speakers support government involvement but disagree on scope – Ioannidis wants to keep core technology…
S41
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Tomiwa Ilori:Thank you very much, Michael. And quickly to my presentation, I’ll be focusing more on the regional initiat…
S42
High-level AI Standards panel — Effective coordination requires mechanisms for standards development organizations to coordinate globally through strate…
S43
From principles to practice: Governing advanced AI in action — Chris emphasizes the importance of coordinating globally to standardize frontier AI risk management frameworks. He notes…
S44
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — Michael Sellitto, Owen Lauder, Austin Marin: Industry-led, consensus-based approach to standards development is prefer…
S45
Digital standards — ‘Standards can underpin regulatory frameworks and […] provide appropriate guardrails for responsible, safe and trustwo…
S46
Multistakeholder platform regulation and the Global South | IGF 2023 Town Hall #170 — In conclusion, platform regulation involves addressing various legal challenges, promoting competition, and addressing c…
S47
WS #283 AI Agents: Ensuring Responsible Deployment — Carter describes specific technical developments including Google’s agent-to-agent protocol for vendor-agnostic interact…
S48
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — Evidence:Currently agents need bespoke code to communicate or must run on the same code base. The protocol will be funda…
S49
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Larter emphasised that the emerging agentic economy requires new technical protocols for agents to communicate with each…
S50
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — Michael Sellitto, Michael Brown, Wifredo Fernandez, Austin Marin, Sihao Huang: Currently agents need bespoke code to c…
S51
Agentic AI in Focus Opportunities Risks and Governance — Evidence:CAISI launched an AI agent standards initiative, issued an RFI on AI agent security, and announced sector-speci…
S52
How Trust and Safety Drive Innovation and Sustainable Growth — Summary:All speakers agreed that trust is the foundational requirement for AI adoption. Without trust, people simply won…
S53
International Cooperation for AI & Digital Governance | IGF 2023 Networking Session #109 — Matthew Liao: Thank you, Kyung. So hi, everybody. Sorry, I couldn’t be there in person, but I’m very honored and delighte…
S54
Launch / Award Event #96 Empower the Global Internet Standards Testing Community — Alena Muravska: colleagues here in the room but also colleagues online and I’m very grateful for this opportunity to be …
S55
Opening session: “International governance of digital technology and AI: at a crossroads?” — Sally Wentworth: Thank you very much. My name is Sally Wentworth. I am the president and chief executive of the Intern…
S56
WS #75 An Open and Democratic Internet in the Digitization Era — Open standards are foundational to the Internet and technological innovation, promoting interoperability and preventing …
S57
Lightning Talk #7 Privacy Redefined: equitable Access in the AI Age — Patricia Larasgita opened by explaining that the Safer Internet Lab is a multi-stakeholder partnership focused on disinf…
S58
The Global Power Shift India’s Rise in AI & Semiconductors — This panel discussion focused on India’s strategic positioning in artificial intelligence and semiconductor technologies…
S59
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — The initial panel featured Ambassador Sergio Gor, Secretary S. Krishnan, and industry representatives discussing the fou…
S60
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — This panel discussion at a major technology conference examined India’s artificial intelligence ambitions through the le…
S61
Panel Discussion AI in Healthcare India AI Impact Summit — This comprehensive discussion on AI in healthcare brought together diverse perspectives from technology, clinical practi…
S62
Panel Discussion AI in Healthcare India AI Impact Summit — Thank you for having me. I’d say we think healthcare, is certainly one of the areas where we’re going to be able to do a…
S63
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — This discussion focused on AI assurance and the challenges of ensuring AI systems, particularly emerging agentic AI, are…
S64
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Hello. Yeah. Thank you very much, Professor Karandika. This is a perfect question for me to talk about. This is why I’m …
S65
WS #204 Closing Digital Divides by Universal Access Acceptance — Allison O’Beirne: So Allison, please you have the floor for your first intervention. Thank you very much. Thanks so much…
S66
Internet Engineering Task Force Open Forum | IGF 2023 Town Hall #32 — The IETF is the premier Standards Development Organization for Internet protocols. Its mission is to make the Internet w…
S67
Widening Lens: A New Narrative for Media Coverage of Cyberspace — The event’s panelists agreed that ecosystem development plays a pivotal role in stimulating the cybersecurity market. Th…
S68
Global Standards for a Sustainable Digital Future — Market adoption sometimes overtakes formal standards development, creating de facto industry standards
S69
WS #187 Bridging Internet AI Governance From Theory to Practice — Vint Cerf: Well, thank you so much for this opportunity. I want to remind everyone that I am not an expert on artificial…
S70
Mistral AI unveils powerful API for autonomous agents — French AI startup Mistral AIhas steppedinto the agentic AI arena by launching a new Agents API. The move puts it in dire…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
S
Sihao Huang
6 arguments · 196 words per minute · 1363 words · 415 seconds
Argument 1
Open, Interoperable AI Agent Standards – Open standards prevent lock‑in and enable global interoperability (Sihao Huang)
EXPLANATION
Sihao argues that open, interoperable AI standards are essential to avoid vendor lock‑in and to allow AI agents to work together across different countries and platforms. By keeping standards open, builders worldwide can adopt, switch, and combine AI models without being tied to a single provider.
EVIDENCE
He explained that standards should let a builder in India or Kenya use American AI products and switch models without being locked in, emphasizing the need for global interoperability and open access to AI technologies [186-190]. He also referenced the broader goal of sharing the best AI technologies with the world, likening it to the historic open Internet model [191-199].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open, voluntary standards are highlighted as essential to avoid vendor lock-in and promote worldwide competition in [S1] and [S13].
MAJOR DISCUSSION POINT
Preventing lock‑in through open standards
AGREED WITH
Michael Sellitto, Owen Lauder, Michael Brown, Wifredo Fernandez, Austin Marin
Argument 2
Specific Protocols and Their Functions – Anthropic’s Model Context Protocol is becoming the de‑facto industry standard (Sihao Huang)
EXPLANATION
Sihao notes that the Anthropic Model Context Protocol (MCP) is emerging as the de-facto standard that many companies are building upon for AI agent interactions. This protocol is shaping the industry’s approach to connecting models with external data and tools.
EVIDENCE
He highlighted MCP as one of the most notable emerging standards that many other companies are adopting, describing it as becoming an industry standard [19-20].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Anthropic’s Model Context Protocol is identified as an emerging industry standard in the discussion of early AI agent protocols [S14].
MAJOR DISCUSSION POINT
MCP as emerging industry standard
Argument 3
Security, Trust, and Auditability – Historical SSL/HTTPS security standards enabled e‑commerce; similar security standards are needed for AI agents (Sihao Huang)
EXPLANATION
Sihao draws a parallel between the historical development of SSL/HTTPS, which enabled secure e‑commerce, and the current need for comparable security standards for AI agents. He suggests that establishing such standards will build trust and facilitate AI‑driven commerce.
EVIDENCE
He referenced the development of SSL and HTTPS as foundational security standards that unlocked e-commerce, arguing that similar security standards are required for AI agents today [206-207].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel draws a direct parallel between SSL/HTTPS enabling e-commerce and the need for comparable AI security standards in [S1], and notes the broader requirement for trust in digital commerce [S16].
MAJOR DISCUSSION POINT
Need for AI security standards analogous to HTTPS
Argument 4
Historical Lessons on Standards – Internet protocols (TCP/IP, HTTPS) illustrate how open standards drive global adoption and prosperity (Sihao Huang)
EXPLANATION
Sihao explains that open Internet protocols such as TCP/IP and HTTPS were crucial in creating a decentralized, globally adopted network that spurred economic growth. He uses this history to argue for similar open AI standards.
EVIDENCE
He described how the U.S. government funded and supported protocols like TCP/IP and HTTPS, which enabled a decentralized Internet and generated worldwide prosperity, including the wealth of Silicon Valley [198-200].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The historical success of TCP/IP and HTTPS as open protocols that spurred global economic growth is emphasized as a model for AI standards in [S1] and [S13].
MAJOR DISCUSSION POINT
Open Internet protocols as a model for AI standards
AGREED WITH
Owen Lauder, Michael Sellitto
Argument 5
Government and International Coordination – The U.S. government’s historic role in internet standards guides the push to export an open AI stack (Sihao Huang)
EXPLANATION
Sihao argues that the U.S. government’s past involvement in establishing open Internet standards should inform its current strategy to promote an open AI stack globally. Exporting such an open AI ecosystem can replicate the economic benefits seen with the Internet.
EVIDENCE
He referenced the U.S. government’s intentional effort to create decentralized Internet protocols and the resulting global benefits, suggesting a similar approach for AI to ensure worldwide access and prosperity [191-199].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The U.S. government’s past involvement in creating open Internet standards is cited as a template for promoting an open AI ecosystem in [S1] and [S2].
MAJOR DISCUSSION POINT
Leveraging historic Internet standards for AI export
Argument 6
Global Impact and Democratization – Standards allow builders in India, Kenya, and elsewhere to use and switch between AI models freely (Sihao Huang)
EXPLANATION
Sihao emphasizes that open AI standards enable developers in diverse regions to adopt, integrate, and transition between different AI models without technical barriers. This democratizes AI access and supports global innovation.
EVIDENCE
He stated that standards should let builders in India, Kenya, and other countries use and switch AI models freely, ensuring no lock-in and fostering global collaboration [186-190].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Examples of builders in India and Kenya benefiting from open AI standards are provided in the discussion of global interoperability in [S1] and [S2].
MAJOR DISCUSSION POINT
Enabling global AI model interoperability
Austin Marin
4 arguments, 191 words per minute, 1263 words, 395 seconds
Argument 1
Open, Interoperable AI Agent Standards – Voluntary, consensus‑based standards avoid duplicated government requests (Austin Marin)
EXPLANATION
Austin explains that voluntary, consensus‑based standards reduce the burden on industry by preventing multiple government agencies from issuing overlapping requests. This coordinated approach streamlines engagement between the private sector and the government.
EVIDENCE
He described the Center’s role in acting as a front door for industry, emphasizing the need to avoid ten different agencies asking the same company for similar information, and highlighted the importance of coordinated, voluntary standards [138-145].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Voluntary, consensus-based standards are presented as a way to reduce overlapping agency requests and streamline industry engagement <a href="https://dig.watch/event/india-ai-impact-summit-2026/u-s-ai-standards-shaping-the-future-of-trustworthy-artificial-intelligence/" target="_blank" class="diplo-source-cite" title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-snippet="Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable">[S1]</a> and [S2].
MAJOR DISCUSSION POINT
Avoiding duplicate government requests through consensus standards
AGREED WITH
Sihao Huang, Michael Sellitto, Owen Lauder, Michael Brown, Wifredo Fernandez
Argument 2
Security, Trust, and Auditability – Request for information on AI agent security; draft standards on identity and authorization; sector listening sessions to address PII concerns (Austin Marin)
EXPLANATION
Austin outlines a series of initiatives: a Request for Information (RFI) on AI agent security, a draft NIST standard on agent identity and authorization, and sector‑specific listening sessions to identify challenges such as PII handling in education, healthcare, and finance.
EVIDENCE
He announced an RFI on AI agent security, closing in March, and encouraged comments [155-162]; mentioned a draft NIST document on identity and authorization [163-165]; and detailed upcoming listening sessions in education, healthcare, and finance to uncover adoption barriers and privacy concerns [165-170].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The RFI on AI agent security, the draft NIST identity/authorization standard, and sector-specific listening sessions are all described in the panel summary <a href="https://dig.watch/event/india-ai-impact-summit-2026/u-s-ai-standards-shaping-the-future-of-trustworthy-artificial-intelligence/" target="_blank" class="diplo-source-cite" title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-snippet="Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable">[S1]</a>.
MAJOR DISCUSSION POINT
Gathering stakeholder input on AI agent security and privacy
Argument 3
Government and International Coordination – The Center for AI Standards and Innovation serves as the industry front door; NIST provides voluntary consensus processes; participation in the International Network for Advanced AI Measurement, Evaluation, and Science (Austin Marin)
EXPLANATION
Austin describes the Center’s role as the primary liaison between industry and the U.S. government, its placement within NIST for voluntary standard development, and its participation in an international network of AI measurement and evaluation institutes.
EVIDENCE
He introduced the Center’s background, its location within the Department of Commerce and NIST, and its mission to coordinate industry engagement and avoid duplicated requests [132-145]; highlighted NIST’s century-long tradition of voluntary standards [146-152]; and noted involvement in the International Network for Advanced AI Measurement, Evaluation, and Science with ten member countries [276-280].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Center’s role as a front door, its placement within NIST’s voluntary standards framework, and its participation in an international AI measurement network are outlined in <a href="https://dig.watch/event/india-ai-impact-summit-2026/u-s-ai-standards-shaping-the-future-of-trustworthy-artificial-intelligence/" target="_blank" class="diplo-source-cite" title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-snippet="Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable">[S1]</a> and further detailed in [S21].
MAJOR DISCUSSION POINT
Center’s coordinating role and international collaboration
Argument 4
Global Impact and Democratization – Sector‑specific listening sessions aim to uncover and address adoption barriers in education, healthcare, and finance globally (Austin Marin)
EXPLANATION
Austin explains that the Center will hold listening sessions in key sectors to identify challenges—especially around PII—and develop metrology, benchmarks, and best‑practice documents that can increase confidence in AI agent deployments.
EVIDENCE
He detailed plans for sector-specific listening sessions in education, healthcare, and finance, describing how they will gather challenges such as PII concerns and potentially lead to standards that enable adoption [165-170].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sector-focused listening sessions in education, healthcare, and finance are highlighted as mechanisms to identify and mitigate adoption challenges <a href="https://dig.watch/event/india-ai-impact-summit-2026/u-s-ai-standards-shaping-the-future-of-trustworthy-artificial-intelligence/" target="_blank" class="diplo-source-cite" title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-snippet="Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable">[S1]</a>.
MAJOR DISCUSSION POINT
Sector listening sessions to drive AI adoption
Wifredo Fernandez
5 arguments, 156 words per minute, 603 words, 231 seconds
Argument 1
Open, Interoperable AI Agent Standards – Open standards create a “parallel Internet” that fuels innovation and competition (Wifredo Fernandez)
EXPLANATION
Wifredo contends that open AI standards will form a new, parallel layer to the Internet, fostering competition and rapid innovation across the AI ecosystem. This “parallel Internet” will enable diverse builders to develop on shared protocols.
EVIDENCE
He described open standards as creating a “layer, call it a new ecosystem, call it a parallel Internet” that is crucial for the development of the broader Internet [121-123].
MAJOR DISCUSSION POINT
Parallel Internet concept for AI
AGREED WITH
Sihao Huang, Michael Sellitto, Owen Lauder, Michael Brown, Austin Marin
Argument 2
Specific Protocols and Their Functions – XAI’s MacroHearts project contributes to the emerging agent ecosystem (Wifredo Fernandez)
EXPLANATION
Wifredo notes that XAI’s secretive MacroHearts project is part of the broader movement toward agent‑centric AI, adding to the ecosystem of standards and capabilities being built by frontier AI companies.
MAJOR DISCUSSION POINT
MacroHearts as part of agent ecosystem
Argument 3
Security, Trust, and Auditability – Standards must embed privacy‑preserving technology, auditability, authenticity, and consent mechanisms (Wifredo Fernandez)
EXPLANATION
Wifredo argues that AI standards need to incorporate core privacy and security principles, including auditability, authenticity, and user consent, to ensure trustworthy AI deployments.
EVIDENCE
He listed guiding principles such as privacy-preserving technology, auditability, authenticity, and consent mechanisms as essential for AI standards [264-268].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for privacy-preserving, auditable, and consent-driven AI standards are echoed in discussions of emerging security needs for AI agents [S17] and the push for open, privacy-focused standards [S18].
MAJOR DISCUSSION POINT
Embedding privacy and consent in AI standards
AGREED WITH
Sihao Huang, Austin Marin, Michael Sellitto, Owen Lauder, Michael Brown
Argument 4
Government and International Coordination – Discussion of regulating agent‑driven social media highlights the need for policy alignment (Wifredo Fernandez)
EXPLANATION
Wifredo raises the question of whether social media platforms that are driven by AI agents should be regulated, underscoring the need for coordinated policy responses as AI agents become more pervasive.
EVIDENCE
He asked whether regulators should address social media platforms that are agent-driven, noting the novel regulatory questions this raises [119-121].
MAJOR DISCUSSION POINT
Regulating agent‑driven social media
Argument 5
Global Impact and Democratization – Open standards foster competition, innovation, and a “parallel Internet” that benefits all regions (Wifredo Fernandez)
EXPLANATION
Wifredo reiterates that open AI standards will generate competition and innovation, creating a parallel Internet that offers benefits globally, especially for emerging markets.
EVIDENCE
He emphasized that open standards create a “parallel Internet” that is crucial for the development of the broader Internet and benefits all regions [121-123].
MAJOR DISCUSSION POINT
Parallel Internet as a democratizing force
AGREED WITH
Sihao Huang, Michael Brown, Austin Marin
Owen Lauder
6 arguments, 212 words per minute, 892 words, 251 seconds
Argument 1
Open, Interoperable AI Agent Standards – Interoperability is essential for reliable agent‑to‑agent communication (Owen Lauder)
EXPLANATION
Owen stresses that for agents to work together effectively, a standardized way of sharing identity, capabilities, and security requirements is required. Interoperability is the foundation of a functional agentic economy.
EVIDENCE
He described the agent-to-agent standard that includes a digitized clipboard sharing ID, capabilities, goals, data handling, and security metadata, which he said is fundamental to greasing the wheels of the agentic economy [64-73].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity of interoperable agent-to-agent communication is underscored by the description of Google’s agent-to-agent protocol as a foundational standard [S14] and by broader remarks on interoperability <a href="https://dig.watch/event/india-ai-impact-summit-2026/u-s-ai-standards-shaping-the-future-of-trustworthy-artificial-intelligence/" target="_blank" class="diplo-source-cite" title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-snippet="Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable">[S1]</a>.
MAJOR DISCUSSION POINT
Need for interoperable agent‑to‑agent communication
AGREED WITH
Sihao Huang, Michael Sellitto, Michael Brown, Wifredo Fernandez, Austin Marin
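The "digitized clipboard" Owen describes can be pictured as a small structured document that two agents exchange on first contact. Below is a minimal Python sketch; the field names are illustrative assumptions, not the actual agent-to-agent schema.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AgentCard:
    """Hypothetical sketch of the metadata one agent hands to another."""
    agent_id: str        # stable identity of the agent
    capabilities: list   # tasks the agent advertises it can perform
    goals: str           # what the agent is currently trying to do
    data_handling: str   # how received data will be stored and used
    security: dict = field(default_factory=dict)  # auth scheme, scopes, etc.

def serialize(card: AgentCard) -> str:
    """Exchange the card as JSON so any vendor's runtime can parse it."""
    return json.dumps(asdict(card), sort_keys=True)

card = AgentCard(
    agent_id="travel-assistant-01",
    capabilities=["search_flights", "book_hotel"],
    goals="arrange a trip for the user",
    data_handling="payment data is never persisted",
    security={"auth": "oauth2", "scopes": ["bookings:write"]},
)
payload = serialize(card)
```

Serializing the card to a vendor-neutral format is one plausible design choice: it lets any runtime parse the metadata without sharing code, which is the point of the standard.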
Argument 2
Specific Protocols and Their Functions – Agent‑to‑Agent protocol shares ID, capabilities, and security metadata; UCP enables agent‑website/payment interactions (Owen Lauder)
EXPLANATION
Owen outlines two protocols: the agent‑to‑agent protocol that conveys essential metadata for agent interaction, and the Universal Commerce Protocol (UCP) that lets agents interact with websites and payment systems, enabling commerce.
EVIDENCE
He explained the agent-to-agent protocol’s metadata fields (ID, capabilities, security) [68-73] and introduced the Universal Commerce Protocol (UCP) for agent-website and payment interactions, describing its transformative potential for business [74-76].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The agent-to-agent protocol that conveys ID, capabilities, and security metadata is identified as an early standard for vendor-agnostic interactions [S14]; the commerce-related aspects align with the broader discussion of AI commerce protocols.
MAJOR DISCUSSION POINT
Agent‑to‑agent and commerce protocols
Argument 3
Government and International Coordination – Collaboration with global partners (e.g., Walmart, Flipkart, Infosys) through shared standards (Owen Lauder)
EXPLANATION
Owen highlights that Google DeepMind is partnering with global retailers and technology firms, demonstrating how shared standards enable cross‑border collaboration and commerce.
EVIDENCE
He mentioned partnerships with Walmart and Target in the U.S., as well as Flipkart and Infosys in India, facilitated by shared agent standards [77-78].
MAJOR DISCUSSION POINT
Global commercial partnerships via standards
Argument 4
Historical Lessons on Standards – Early skepticism about online credit‑card use was overcome by HTTPS, unlocking the modern digital economy (Owen Lauder)
EXPLANATION
Owen reflects on the early belief that putting credit‑card information online was unsafe, and how the adoption of HTTPS changed that perception, enabling secure e‑commerce.
EVIDENCE
He recalled the 1990s mindset that credit-card data should never be online and how the secure HTTPS layer later unlocked the modern digital economy [235-238].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The transformation of e-commerce through HTTPS is cited as a historic lesson for AI security standards <a href="https://dig.watch/event/india-ai-impact-summit-2026/u-s-ai-standards-shaping-the-future-of-trustworthy-artificial-intelligence/" target="_blank" class="diplo-source-cite" title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-snippet="Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable">[S1]</a> and reinforced by the emphasis on trust for digital transactions [S16].
MAJOR DISCUSSION POINT
HTTPS as a turning point for e‑commerce
AGREED WITH
Sihao Huang, Michael Sellitto
Argument 5
Historical Lessons on Standards – Electrical standards (volts, amps) enabled safe, universal power grid integration (Owen Lauder)
EXPLANATION
Owen points out that standardizing electrical units allowed devices to safely connect to power grids worldwide, illustrating how technical standards facilitate global interoperability.
EVIDENCE
He described how standardizing units like volts, amps, and ohms enabled power supplies to connect to the grid and allowed inventions such as fuses to protect circuits [239-242].
MAJOR DISCUSSION POINT
Electrical standards as a model for AI standards
Argument 6
Security, Trust, and Auditability – Security requirements are embedded in agent‑to‑agent metadata and are fundamental to the agentic economy (Owen Lauder)
EXPLANATION
Owen asserts that security metadata—such as authentication and data handling requirements—must be part of the agent‑to‑agent exchange to ensure safe and trustworthy interactions within the agentic economy.
EVIDENCE
He listed security requirements as part of the agent-to-agent metadata (e.g., security requirements field) and emphasized its fundamental role for the agentic economy [71-73].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Embedding security metadata within agent-to-agent exchanges is highlighted as a core component of emerging AI agent standards [S14] and the broader security framework discussed in <a href="https://dig.watch/event/india-ai-impact-summit-2026/u-s-ai-standards-shaping-the-future-of-trustworthy-artificial-intelligence/" target="_blank" class="diplo-source-cite" title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-snippet="Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable">[S1]</a>.
MAJOR DISCUSSION POINT
Embedding security in agent metadata
AGREED WITH
Sihao Huang, Austin Marin, Michael Sellitto, Michael Brown, Wifredo Fernandez
Michael Sellitto
4 arguments, 183 words per minute, 1123 words, 366 seconds
Argument 1
Open, Interoperable AI Agent Standards – MCP provides a universal way for models to connect to data and tools (Michael Sellitto)
EXPLANATION
Michael describes the Model Context Protocol (MCP) as a universal, open standard that lets AI models access enterprise knowledge bases, government data, and other tools in a consistent manner, simplifying integration.
EVIDENCE
He explained that MCP connects AI models to enterprise knowledge bases and government data sources by providing a rough description of the data source and tools, enabling intuitive access similar to human users [28-36].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Model Context Protocol (MCP) is presented as a universal, open standard for linking AI models to data and tools in the early AI standards landscape [S14].
MAJOR DISCUSSION POINT
MCP as universal data‑tool connector
AGREED WITH
Sihao Huang, Owen Lauder, Michael Brown, Wifredo Fernandez, Austin Marin
Argument 2
Specific Protocols and Their Functions – MCP links AI agents to enterprise data sources; Skills protocol enables reusable task instructions (Michael Sellitto)
EXPLANATION
Michael expands on MCP’s role in linking agents to data and introduces the Skills protocol, which encodes reusable task instructions that can be transferred across models and vendors.
EVIDENCE
He detailed MCP’s function of describing data sources and tools for model access [28-36] and described Skills as a set of instructions that teach agents tasks, allowing portability across providers [39-47].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
MCP’s role in describing data sources for model access is documented in the discussion of emerging protocols [S14]; the concept of reusable task instructions aligns with the broader push for portable AI capabilities.
MAJOR DISCUSSION POINT
MCP and Skills for data access and task portability
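The core idea Michael describes (describe a data source and its tools once, and any model can discover and call them) can be shown schematically. This is not the real MCP SDK or its wire format, which uses JSON-RPC; it is a hedged Python sketch of the shape: tools advertised with names, descriptions, and input schemas, then invoked by name. The tool and document identifiers are invented for illustration.

```python
# Schematic of an MCP-style server. Real MCP exchanges JSON-RPC messages
# over stdio or HTTP; this only mirrors the discovery/invocation shape.
TOOLS = {
    "lookup_policy": {
        "description": "Fetch a government policy document by identifier.",
        "input_schema": {"doc_id": "string"},
        "handler": lambda args: f"policy text for {args['doc_id']}",
    },
}

def list_tools():
    """What a model sees: descriptions good enough to pick a tool intuitively."""
    return {name: {"description": t["description"],
                   "input_schema": t["input_schema"]}
            for name, t in TOOLS.items()}

def call_tool(name, args):
    """What the model invokes once it has chosen a tool."""
    return TOOLS[name]["handler"](args)

catalog = list_tools()
result = call_tool("lookup_policy", {"doc_id": "AI-2026-07"})
```

The separation between `list_tools` (discovery) and `call_tool` (invocation) is what lets the same data source serve any model without bespoke integration work.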
Argument 3
Security, Trust, and Auditability – Authentication, auditability, and user‑confirmation are required before agents take significant actions (Michael Sellitto)
EXPLANATION
Michael stresses that agents must incorporate authentication, audit trails, and mechanisms for user confirmation before performing high‑impact actions, ensuring accountability and trust.
EVIDENCE
He stated that agents need security, authentication, and the ability to check back with the user before making significant decisions, and that this information should be intelligible to customers [227-229].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for authentication, audit trails, and user confirmation in AI agent actions is emphasized as a security imperative for trustworthy AI [S17] and reflected in the panel’s security focus <a href="https://dig.watch/event/india-ai-impact-summit-2026/u-s-ai-standards-shaping-the-future-of-trustworthy-artificial-intelligence/" target="_blank" class="diplo-source-cite" title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-snippet="Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable">[S1]</a>.
MAJOR DISCUSSION POINT
Authentication and auditability for agent actions
AGREED WITH
Sihao Huang, Austin Marin, Owen Lauder, Michael Brown, Wifredo Fernandez
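Michael's requirement that agents check back with the user before high-impact actions can be sketched as a simple confirmation gate plus an audit trail. This is an illustration of the principle, not a described implementation; the action names and log format are assumptions.

```python
import datetime

AUDIT_LOG = []                                 # every attempt, allowed or not
HIGH_IMPACT = {"transfer_funds", "delete_records"}

def execute(action, params, confirm):
    """Run an agent action, routing high-impact ones through user confirmation
    and recording every attempt for later audit."""
    entry = {"action": action, "params": params,
             "at": datetime.datetime.now(datetime.timezone.utc).isoformat()}
    if action in HIGH_IMPACT and not confirm(action, params):
        entry["outcome"] = "blocked: user declined"
        AUDIT_LOG.append(entry)
        return None
    entry["outcome"] = "executed"
    AUDIT_LOG.append(entry)
    return f"{action} done"

# The confirm callback stands in for a real prompt to the user.
approved = execute("transfer_funds", {"amount": 100}, confirm=lambda a, p: True)
denied = execute("delete_records", {"table": "users"}, confirm=lambda a, p: False)
```

Logging declined attempts as well as executed ones is what makes the trail an audit record rather than just a history of successes.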
Argument 4
Historical Lessons on Standards – Automotive standards (fuel economy, crash tests) provide an analogy for how standardized metrics build consumer confidence (Michael Sellitto)
EXPLANATION
Michael uses the automobile industry’s standardized metrics—such as fuel economy ratings and crash‑test results—to illustrate how consistent, third‑party metrics give consumers confidence in products, a model applicable to AI standards.
EVIDENCE
He compared AI standards to car industry metrics, describing how standardized fuel-economy and crash-test data provide trust and allow consumers to make informed choices [212-219].
MAJOR DISCUSSION POINT
Analogy of automotive standards for AI trust
AGREED WITH
Sihao Huang, Owen Lauder
Michael Brown
5 arguments, 163 words per minute, 631 words, 232 seconds
Argument 1
Open, Interoperable AI Agent Standards – Shared “traffic‑light” style standards create consistent, secure AI worldwide (Michael Brown)
EXPLANATION
Michael likens AI standards to universal traffic‑light signals, arguing that shared conventions (red means stop, green means go) provide a common understanding that ensures consistent and secure AI behavior across nations.
EVIDENCE
He used the traffic-light analogy, noting that red universally means stop and green means go, and argued that such shared understanding enables secure, interoperable AI worldwide [86-90].
MAJOR DISCUSSION POINT
Traffic‑light analogy for global AI standards
AGREED WITH
Sihao Huang, Michael Sellitto, Owen Lauder, Wifredo Fernandez, Austin Marin
Argument 2
Specific Protocols and Their Functions – OpenAI’s commerce protocol lets agents autonomously book travel, shop, etc. (Michael Brown)
EXPLANATION
Michael describes OpenAI’s commerce protocol, which enables agents to act on behalf of users to arrange travel, secure flights, book hotels, and perform other e‑commerce activities autonomously.
EVIDENCE
He gave the example of an agent knowing a family wants to vacation in Goa and then automatically securing flights and hotels using the commerce protocol [98-100].
MAJOR DISCUSSION POINT
Autonomous commerce via OpenAI protocol
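The Goa example can be sketched as an agent chaining bookings under a budget. Everything here (the catalog, prices, and function names) is invented for illustration; the real commerce protocols define their own message formats and merchant integrations.

```python
# Toy merchant catalog standing in for offers an agent would fetch
# over a commerce protocol.
CATALOG = {
    ("flight", "Goa"): {"item": "round-trip flight to Goa", "price": 180},
    ("hotel", "Goa"): {"item": "beachside hotel, 4 nights", "price": 320},
}

def book(kind, destination, budget):
    """Agent-side step: take an offer only if it fits the remaining budget."""
    offer = CATALOG.get((kind, destination))
    if offer and offer["price"] <= budget:
        return {"booked": offer["item"], "charged": offer["price"]}
    return None

def plan_trip(destination, budget):
    """Chain bookings the way an agent would act on 'we want to visit Goa'."""
    itinerary, remaining = [], budget
    for kind in ("flight", "hotel"):
        booking = book(kind, destination, remaining)
        if booking is None:
            return None  # abort rather than partially book over budget
        itinerary.append(booking)
        remaining -= booking["charged"]
    return itinerary

trip = plan_trip("Goa", budget=600)
```

Aborting the whole plan when one step fails is a deliberate sketch-level choice: an agent acting on a user's money should not leave a half-booked trip behind.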
Argument 3
Security, Trust, and Auditability – Trust in agents handling personal/financial data is a prerequisite for widespread adoption (Michael Brown)
EXPLANATION
Michael argues that for AI agents to be widely adopted, users must trust that agents can securely manage personal and financial information, which requires robust security and transparency mechanisms.
EVIDENCE
He linked shared understanding (traffic-light analogy) to security and trust, stating that trust is needed for agents handling personal/financial data before broad adoption can occur [91-92] and later emphasized the need for authentication and auditability before agents take significant actions [226-229].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust and security for personal and financial data are identified as critical for AI adoption, mirroring the security-trust requirements for e-commerce [S16] and the broader call for AI security standards [S17].
MAJOR DISCUSSION POINT
Trust as a prerequisite for AI adoption
AGREED WITH
Sihao Huang, Austin Marin, Michael Sellitto, Owen Lauder, Wifredo Fernandez
Argument 4
Government and International Coordination – Government can convene industries to set norms, but industry leads the technical work (Michael Brown)
EXPLANATION
Michael notes that while governments can play a convening role to bring together different sectors, the technical development of standards should be driven by industry experts who understand user needs.
EVIDENCE
He stated that government can bring industries together for dialogue, but industry must remain front and center in establishing technical norms because its members are the practitioners and experts [252-254].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel stresses a convening role for government while keeping technical standard development industry-driven, consistent with the voluntary, consensus-based approach described in <a href="https://dig.watch/event/india-ai-impact-summit-2026/u-s-ai-standards-shaping-the-future-of-trustworthy-artificial-intelligence/" target="_blank" class="diplo-source-cite" title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-snippet="Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable">[S1]</a> and the NIST tradition [S2].
MAJOR DISCUSSION POINT
Industry‑led technical standards with government facilitation
Argument 5
Global Impact and Democratization – Shared protocols let agents serve users worldwide, exemplified by autonomous travel booking (Michael Brown)
EXPLANATION
Michael highlights that shared AI protocols enable agents from different companies and countries to collaborate seamlessly, illustrated by agents autonomously arranging travel for users across borders.
EVIDENCE
He described an agent that knows a user wants to travel to Goa and can automatically secure flights and hotels, demonstrating cross-border, shared-protocol functionality [98-100].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Cross-border agent services enabled by shared protocols are highlighted as a benefit of open standards for global interoperability <a href="https://dig.watch/event/india-ai-impact-summit-2026/u-s-ai-standards-shaping-the-future-of-trustworthy-artificial-intelligence/" target="_blank" class="diplo-source-cite" title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-snippet="Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable">[S1]</a> and the discussion of builders in diverse regions [S13].
MAJOR DISCUSSION POINT
Cross‑border agent services via shared protocols
Agreements
Agreement Points
Open, interoperable AI agent standards are essential to avoid vendor lock‑in and enable global interoperability.
Speakers: Sihao Huang, Michael Sellitto, Owen Lauder, Michael Brown, Wifredo Fernandez, Austin Marin
Open, Interoperable AI Agent Standards – Open standards prevent lock‑in and enable global interoperability (Sihao Huang)
Open, Interoperable AI Agent Standards – MCP provides a universal way for models to connect to data and tools (Michael Sellitto)
Open, Interoperable AI Agent Standards – Interoperability is essential for reliable agent‑to‑agent communication (Owen Lauder)
Open, Interoperable AI Agent Standards – Shared “traffic‑light” style standards create consistent, secure AI worldwide (Michael Brown)
Open, Interoperable AI Agent Standards – Open standards create a “parallel Internet” that fuels innovation and competition (Wifredo Fernandez)
Open, Interoperable AI Agent Standards – Voluntary, consensus‑based standards avoid duplicated government requests (Austin Marin)
All speakers emphasized that open, interoperable standards (whether MCP, agent-to-agent, commerce protocols, or broader voluntary frameworks) prevent lock-in, allow builders in any country to adopt or switch models, and create a shared layer akin to the Internet [186-190][191-199][28-36][39-47][64-73][86-92][121-123][138-145][146-152].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for open, interoperable AI standards echo the role of digital standards in providing guardrails for trustworthy AI development and preventing lock-in, as highlighted by international standards bodies [S45] and multistakeholder cooperation initiatives [S39].
Security, trust, and auditability are critical for AI agents, analogous to SSL/HTTPS for e‑commerce.
Speakers: Sihao Huang, Austin Marin, Michael Sellitto, Owen Lauder, Michael Brown, Wifredo Fernandez
Security, Trust, and Auditability – Historical SSL/HTTPS security standards enabled e‑commerce (Sihao Huang)
Security, Trust, and Auditability – Request for information on AI agent security; draft standards on identity and authorization (Austin Marin)
Security, Trust, and Auditability – Authentication, auditability, and user‑confirmation are required before agents take significant actions (Michael Sellitto)
Security, Trust, and Auditability – Security requirements are embedded in agent‑to‑agent metadata and are fundamental to the agentic economy (Owen Lauder)
Security, Trust, and Auditability – Trust in agents handling personal/financial data is a prerequisite for widespread adoption (Michael Brown)
Security, Trust, and Auditability – Standards must embed privacy‑preserving technology, auditability, authenticity, and consent mechanisms (Wifredo Fernandez)
The panel repeatedly linked the need for robust security, authentication, audit trails and privacy safeguards to the trust required for AI agents, drawing parallels to SSL/HTTPS and emphasizing upcoming RFI, draft identity standards, and protocol-level security fields [206-207][155-165][227-229][71-73][91-92][226-229][264-268].
POLICY CONTEXT (KNOWLEDGE BASE)
Security, trust and auditability are repeatedly emphasized as foundational for AI agents, mirroring requirements for transparency and traceability in UN discussions [S27], e-commerce security in ECOWAS [S28], and the need for robust security layers to build trust in agentic AI [S29][S30].
Historical standards (Internet, electrical, automotive) provide lessons for designing AI standards.
Speakers: Sihao Huang, Owen Lauder, Michael Sellitto
Historical Lessons on Standards – Internet protocols (TCP/IP, HTTPS) illustrate how open standards drive global adoption and prosperity (Sihao Huang)
Historical Lessons on Standards – Early skepticism about online credit‑card use was overcome by HTTPS, unlocking the modern digital economy (Owen Lauder)
Historical Lessons on Standards – Automotive standards (fuel economy, crash tests) provide an analogy for how standardized metrics build consumer confidence (Michael Sellitto)
All three speakers cited past standard-setting successes (Internet protocols, HTTPS for e-commerce, electrical units, and automotive safety metrics) to argue that similar open, technically robust standards can guide AI development [198-200][235-242][212-219].
Government and international coordination are essential for effective AI standards development.
Speakers: Austin Marin, Sihao Huang, Michael Brown, Owen Lauder
Government and International Coordination – The Center serves as the industry front door; NIST provides voluntary consensus processes; participation in International Network (Austin Marin)
Government and International Coordination – U.S. government’s historic role in Internet standards guides export of an open AI stack (Sihao Huang)
Government and International Coordination – Government can convene industries to set norms, but industry leads technical work (Michael Brown)
Government and International Coordination – Collaboration with global partners (e.g., Walmart, Flipkart, Infosys) demonstrates cross‑border coordination via standards (Owen Lauder)
Speakers highlighted the Center’s role as a front-door liaison, the legacy of U.S. government support for open standards, the need for government convening while industry drives technical details, and examples of global partnerships enabled by standards [132-145][276-280][191-199][252-254][77-78].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple reports stress the necessity of coordinated government and international effort to bridge technical standards with policy, citing multistakeholder cooperation [S39], divergent views on the scope of government involvement [S40], and the need for global coordination mechanisms among standards bodies [S42][S43].
Open standards democratize AI, allowing builders worldwide to create and exchange services across borders.
Speakers: Sihao Huang, Michael Brown, Wifredo Fernandez, Austin Marin
Global Impact and Democratization – Standards allow builders in India, Kenya, and elsewhere to use and switch AI models freely (Sihao Huang)
Global Impact and Democratization – Shared protocols let agents serve users worldwide, e.g., autonomous travel booking (Michael Brown)
Global Impact and Democratization – Open standards foster competition, innovation, and a “parallel Internet” that benefits all regions (Wifredo Fernandez)
Government and International Coordination – International Network for Advanced AI Measurement, Evaluation, and Science facilitates global engagement (Austin Marin)
All agreed that open, interoperable protocols enable developers in emerging markets to build on AI services, support cross-border commerce, and create a new “parallel Internet” that benefits the global digital economy [186-190][98-100][91-92][121-123][276-280].
POLICY CONTEXT (KNOWLEDGE BASE)
Open digital standards are portrayed as enablers of democratized AI development and cross-border service exchange, providing the guardrails for responsible AI while fostering inclusive participation [S45] and supporting international multistakeholder collaboration [S39].
Similar Viewpoints
Both stress that open, voluntary, consensus‑based standards are the best way to prevent lock‑in and reduce bureaucratic duplication for industry [191-199][138-145][146-152].
Speakers: Sihao Huang, Austin Marin
Open, Interoperable AI Agent Standards – Open standards prevent lock‑in and enable global interoperability (Sihao Huang)
Open, Interoperable AI Agent Standards – Voluntary, consensus‑based standards avoid duplicated government requests (Austin Marin)
Both argue that protocols must embed clear metadata (data source description, capabilities, security) to make agents interoperable across platforms [28-36][64-73].
Speakers: Michael Sellitto, Owen Lauder
Open, Interoperable AI Agent Standards – MCP provides a universal way for models to connect to data and tools (Michael Sellitto)
Open, Interoperable AI Agent Standards – Interoperability is essential for reliable agent‑to‑agent communication (Owen Lauder)
Both use the HTTPS/SSL story to illustrate how security standards unlock commerce and trust in digital systems [206-207][235-238].
Speakers: Sihao Huang, Owen Lauder
Security, Trust, and Auditability – Historical SSL/HTTPS security standards enabled e‑commerce (Sihao Huang)
Historical Lessons on Standards – Early skepticism about online credit‑card use was overcome by HTTPS, unlocking the modern digital economy (Owen Lauder)
Both see a complementary role where government convenes and coordinates, while industry drives the technical development of standards [132-145][252-254].
Speakers: Austin Marin, Michael Brown
Government and International Coordination – The Center serves as the industry front door; NIST provides voluntary consensus processes (Austin Marin)
Government and International Coordination – Government can convene industries to set norms, but industry leads technical work (Michael Brown)
Unexpected Consensus
Positive references to the Trump administration from both a U.S. government official and an industry representative.
Speakers: Sihao Huang, Michael Sellitto
Government and International Coordination – U.S. government’s historic role … (Sihao Huang) (includes mention of the Trump administration)
Open, Interoperable AI Agent Standards – MCP provides a universal way … (Michael Sellitto) (mentions partnership with the Trump administration)
It is uncommon for a technical standards discussion to contain explicit praise of a specific administration; both Sihao and Michael Sellitto highlighted the Trump administration’s role in supporting AI standards and partnerships [190][27].
POLICY CONTEXT (KNOWLEDGE BASE)
Both a U.S. government official’s remarks at the World Economic Forum and industry commentary highlighted favorable views of the Trump administration’s leadership and policy direction [S31][S32].
Overall Assessment

The panel displayed strong convergence on four main themes: the necessity of open, interoperable AI standards; the critical role of security, trust, and auditability; the value of historical standard‑setting lessons; and the importance of coordinated government‑industry and international collaboration to democratize AI worldwide.

High consensus – the repeated alignment across all speakers suggests a shared vision that will likely translate into coordinated policy initiatives, industry road‑maps, and international cooperation, accelerating the development of a secure, open AI ecosystem.

Differences
Different Viewpoints
Governance model for AI standards – government‑led coordination vs industry‑led technical development
Speakers: Sihao Huang, Austin Marin, Michael Brown
Open standards should be driven by U.S. government initiatives to export an open AI stack globally (Sihao Huang)
The Center for AI Standards and Innovation acts as the front‑door for industry, coordinating to avoid duplicated agency requests (Austin Marin)
Government can convene but industry must remain front‑center in establishing technical norms (Michael Brown)
Sihao and Austin argue that a strong U.S. government role is essential to shape and export open AI standards, positioning the government as the primary driver and coordinator of the ecosystem [191-199][138-145]. Michael Brown acknowledges a governmental convening role but stresses that the technical work and norm-setting must be led by industry itself, suggesting a more limited governmental influence [252-254]. This creates a tension between a government-centric versus industry-centric approach to standard development.
POLICY CONTEXT (KNOWLEDGE BASE)
Debate over whether AI standards should be driven by government coordination or industry consensus is reflected in analyses of multistakeholder cooperation [S39], differing expert positions on the extent of governmental oversight [S40], and industry-preferred consensus approaches [S44].
Prioritisation of security mechanisms in AI agent standards
Speakers: Sihao Huang, Michael Sellitto, Owen Lauder, Wifredo Fernandez
Security standards analogous to SSL/HTTPS are needed to enable trustworthy AI commerce (Sihao Huang)
Authentication, auditability and user‑confirmation before significant actions are essential (Michael Sellitto)
Security requirements should be embedded as metadata in agent‑to‑agent exchanges (Owen Lauder)
Standards must embed privacy‑preserving, auditability, authenticity and consent mechanisms (Wifredo Fernandez)
All speakers agree security is critical, but they emphasise different components: Sihao draws a historical analogy to SSL/HTTPS as a foundation for e-commerce [206-207]; Michael Sellitto focuses on authentication, audit trails and user confirmation before high-impact actions [227-229]; Owen proposes that security metadata be part of the agent-to-agent protocol itself [71-73]; Wifredo stresses broader privacy, auditability and consent principles [264-268]. The divergence lies in which security layer should be prioritised and how it should be operationalised.
POLICY CONTEXT (KNOWLEDGE BASE)
Security mechanisms are identified as a foundational layer for trust in AI agents, with calls to prioritize them in standards development [S29][S30].
Scope of regulation – whether agent‑driven social media platforms should be specifically regulated
Speakers: Wifredo Fernandez, Other panelists (implicit disagreement)
Regulating social media platforms that are agent‑driven raises novel policy questions (Wifredo Fernandez)
No other speaker directly addressed regulation of agent‑driven social media, focusing instead on standards and interoperability
Wifredo raises the issue of regulating agent-driven social media platforms as a new challenge [119-121]. The rest of the panel does not engage with this regulatory angle, concentrating on technical standards and industry coordination, indicating an unexpected gap in the discussion.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions highlight the open question of regulating agent-driven social media, with U.S. AI standards reports noting the need to determine a regulatory approach [S35] and broader calls for social media platform regulation to address misinformation and safety concerns [S36][S37][S38].
International versus U.S.–centric approach to AI standards deployment
Speakers: Sihao Huang, Austin Marin
The U.S. should leverage its historic role to export an open AI stack worldwide (Sihao Huang)
Engagement is pursued through a multilateral International Network for Advanced AI Measurement, Evaluation, and Science (Austin Marin)
Sihao emphasizes a U.S.-led export model based on historical Internet standards to spread open AI globally [191-199]. Austin, while acknowledging U.S. leadership, highlights participation in a ten-country international network to develop consensus-based standards, suggesting a more collaborative multilateral approach [276-280]. This reflects differing views on the balance between national leadership and international cooperation.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between international multistakeholder coordination and a U.S.-centric deployment strategy is noted in global cooperation analyses [S39] and regional AI governance initiatives that stress inclusive, cross-regional policy making [S41].
Unexpected Differences
Regulation of agent‑driven social media platforms
Speakers: Wifredo Fernandez, Other panelists
Raises novel regulatory questions about agent‑driven social media (Wifredo Fernandez)
No other panelist addresses this regulatory dimension, focusing on standards and interoperability
While the panel largely concentrates on technical standards and industry coordination, Wifredo uniquely brings up the need to consider regulatory frameworks for agent‑driven social media, a topic not reflected elsewhere in the discussion, indicating an unexpected divergence in focus.
POLICY CONTEXT (KNOWLEDGE BASE)
The broader issue of regulating platforms powered by AI agents is addressed in U.S. AI standards deliberations [S35] and in discussions of platform regulation frameworks for the Global South and beyond [S46].
Overall Assessment

The panel shows strong consensus on the importance of open, interoperable AI standards to drive global innovation and avoid lock‑in. The primary disagreements revolve around who should lead the standard‑setting process (government versus industry), the prioritisation of specific security components, the balance between U.S. leadership and multilateral cooperation, and the emerging question of regulating agent‑driven social media.

Moderate – while all participants share the overarching vision of open standards, the differing views on governance structures, security priorities, and regulatory scope suggest that coordination will require careful negotiation. These tensions could affect the speed and inclusivity of standard adoption, especially across jurisdictions and sectors.

Partial Agreements
All speakers concur that open, interoperable standards are a shared goal that will foster innovation, prevent vendor lock‑in and support global AI development. However, they diverge on the mechanisms: Sihao and Austin focus on government‑facilitated consensus processes; Michael Sellitto highlights a specific technical protocol (MCP); Owen stresses metadata‑rich agent‑to‑agent standards; Michael Brown uses a metaphor for universal conventions; Wifredo frames it as a new ecosystem layer. The consensus on the goal contrasts with varied pathways to achieve it.
Speakers: Sihao Huang, Austin Marin, Michael Sellitto, Owen Lauder, Michael Brown, Wifredo Fernandez
Open, interoperable AI agent standards are essential to avoid lock‑in and enable global collaboration (Sihao Huang)
Voluntary, consensus‑based standards reduce duplicated government requests and streamline industry engagement (Austin Marin)
MCP provides a universal way for models to connect to data and tools (Michael Sellitto)
Interoperability is essential for reliable agent‑to‑agent communication (Owen Lauder)
Shared “traffic‑light” style standards create consistent, secure AI worldwide (Michael Brown)
Open standards create a “parallel Internet” that fuels innovation (Wifredo Fernandez)
Takeaways
Key takeaways
Open, interoperable AI agent standards are essential to avoid vendor lock‑in and to enable global collaboration (e.g., MCP, Agent‑to‑Agent, UCP, Skills).
Voluntary, consensus‑based processes led by NIST and the Center for AI Standards and Innovation are preferred over prescriptive regulation.
Security, authentication, auditability, privacy, and user consent must be baked into agent protocols before widespread adoption.
Historical precedents (TCP/IP, HTTPS, automotive and electrical standards) illustrate how open standards drive innovation, commerce, and trust.
International coordination (through the International Network for Advanced AI Measurement, Evaluation, and Science) is underway to align measurement, evaluation, and security practices.
Sector‑specific challenges (e.g., PII in education, healthcare, finance) need targeted guidance and benchmarks.
Industry participants (Anthropic, Google DeepMind, OpenAI, XAI) are actively developing and sharing protocols, and see them as building blocks for a “parallel Internet” of AI services.
Resolutions and action items
Submit comments to the Center for AI Standards and Innovation’s Request for Information on AI agent security (deadline March).
Review and comment on NIST ITL’s draft standards for AI agent identity and authorization.
Participate in upcoming sector‑specific listening sessions (education, healthcare, finance) planned for April.
Continue collaborative development of open protocols (MCP, Skills, Agent‑to‑Agent, UCP, commerce protocols) and share implementations across companies.
Engage with international partners via the International Network for Advanced AI Measurement, Evaluation, and Science to harmonize measurement and evaluation methods.
Unresolved issues
How to create a unified, cross‑company standard for agent‑to‑agent communication without fragmenting the ecosystem.
Specific mechanisms for handling personally identifiable information (PII) and ensuring compliance across diverse regulatory regimes.
Regulatory approach for agent‑driven social media platforms and other novel use‑cases.
Balancing openness with security: determining the right level of mandatory authentication and user‑confirmation for high‑impact actions.
Potential overlap or competition among similar protocols (e.g., OpenAI’s commerce protocol vs. Google’s UCP).
Suggested compromises
Adopt voluntary, industry‑driven standards while allowing government to act as a convening and coordination body, avoiding duplicate agency requests.
Emphasize open standards to preserve interoperability and competition, but embed security and privacy requirements to satisfy adoption concerns.
Allow multiple protocol implementations to coexist (e.g., different commerce protocols) as long as they adhere to shared security and interoperability guidelines.
Thought Provoking Comments
MCP is a universal open standard for connecting AI systems to the tools and data sources that people already use… you just need to give the model a rough description of what’s in the data source and what kind of tools it can access, and the model will intuitively know how to use those data sources.
Introduces a concrete, vendor‑agnostic protocol that solves the painful bespoke integration problem, highlighting how open standards can unlock interoperability and data portability across enterprises and governments.
Set the technical baseline for the discussion, prompting other panelists to reference their own protocols (agent‑to‑agent, commerce) and framing the rest of the conversation around how such standards can be adopted globally.
Speaker: Michael Sellitto (Anthropic)
Our agent‑to‑agent standard is essentially a digitized clipboard of information that an agent will share with another agent – ID, capabilities, intent, data needs, and security requirements.
Provides a vivid metaphor for how agents can communicate securely and efficiently, moving the conversation from abstract standards to a tangible mechanism for an emerging ‘agent economy.’
Shifted the dialogue toward inter‑agent communication and commerce, leading other speakers (e.g., Michael Brown) to discuss cross‑company and cross‑border agent interactions.
Speaker: Owen Lauder (Google DeepMind)
Having a shared understanding in countries, rich and poor, advanced and developing, around how things work… like traffic lights – it lets builders know that what they’re building will be secure, accessible, and useful everywhere.
Uses a simple, universal analogy to illustrate why global standards matter, linking technical interoperability to everyday safety and trust, and emphasizing the societal dimension of standards.
Reframed the technical discussion as a matter of global public good, prompting Sihao and Austin to connect AI standards to historical internet standards and to discuss policy implications.
Speaker: Michael Brown (OpenAI)
The Internet succeeded because the U.S. government supported open, decentralized protocols like TCP/IP and HTTPS. We must repeat that model for AI – open standards, not closed, nation‑centric systems.
Draws a powerful historical parallel, arguing against protectionist approaches and positioning open AI standards as a strategic economic and diplomatic tool.
Created a turning point toward policy‑focused dialogue, leading Austin to describe NIST’s voluntary consensus process and prompting the panel to consider security, sovereignty, and international cooperation.
Speaker: Sihao Huang (White House OSTP)
We’ve issued a Request for Information on AI agent security, have a draft on agent identity and authorization, and will hold sector‑specific listening sessions in education, healthcare, and finance to surface real‑world challenges.
Outlines concrete, actionable steps the government is taking, moving the conversation from abstract ideals to tangible engagement mechanisms with industry and stakeholders.
Steered the discussion toward next‑steps and collaboration pathways, encouraging participants to think about how their protocols can feed into NIST’s voluntary standards and sectoral pilots.
Speaker: Austin Marin (Center for AI Standards and Innovation, Dept. of Commerce)
When we think about agents buying music for us, we need privacy‑preserving, auditable, authentic, consent‑driven technology… agents should serve users and organizations, not just model developers.
Broadens the scope beyond technical interoperability to ethical and regulatory concerns, highlighting the need for standards that embed privacy, auditability, and consent.
Introduced a regulatory dimension that prompted others (e.g., Michael Sellitto’s security analogy) to discuss trust, authentication, and the role of standards in safeguarding user rights.
Speaker: Wifredo Fernandez (XAI)
Think of car metrics – fuel economy, crash test results – measured by independent, standardized third parties. That same confidence is needed for AI agents, especially around security and sovereignty.
Uses a relatable automobile safety analogy to explain why standardized, third‑party evaluated metrics are essential for trust in AI agents, linking security standards to market adoption and geopolitical concerns.
Deepened the security discussion, reinforcing Sihao’s point about SSL/HTTPS, and leading Owen and others to stress the importance of robust, interoperable security standards for the emerging AI economy.
Speaker: Michael Sellitto (Anthropic)
Overall Assessment

The discussion pivoted around a handful of high‑impact remarks that moved the panel from a generic overview of AI protocols to a nuanced, multi‑layered conversation about interoperability, security, global policy, and ethical governance. Michael Sellitto’s exposition of MCP and Skills established the technical foundation, while Owen Lauder’s ‘digitized clipboard’ metaphor expanded the scope to inter‑agent commerce. Michael Brown’s traffic‑light analogy reframed standards as a universal safety language, prompting Sihao Huang to invoke the historic success of open internet protocols as a blueprint for AI. Austin Marin then translated these ideas into concrete government actions, and Wifredo Fernandez reminded the group of privacy and consent imperatives. Collectively, these comments created turning points that shifted the tone from descriptive to prescriptive, aligned industry and government perspectives, and highlighted the intertwined technical, security, and societal challenges that must be addressed through open, consensus‑driven standards.

Follow-up Questions
How do you see the future of AI standards and agent development, and how can AI agent standards reflect the same principles that enabled the open internet, including interoperability and security?
Guides the overall direction of standard‑setting to ensure openness, cross‑border compatibility, and trustworthy security, which are essential for a global AI ecosystem.
Speaker: Sihao Huang
How are your organizations engaging with the rest of the world on AI standards, and what are the most exciting applications developed on top of your standards and products?
Understanding international collaboration and real‑world use cases helps assess the effectiveness of standards and showcases tangible benefits for builders worldwide.
Speaker: Sihao Huang
What are the key AI agent security challenges that need to be addressed through voluntary standards and best practices?
Security is a prerequisite for widespread adoption; identifying threats and gaps is the first step toward creating robust, trusted standards.
Speaker: Austin Marin
How should AI agent identity and authorization be standardized to ensure trustworthy interactions?
A common framework for identity and authorization will enable secure agent‑to‑agent and agent‑to‑service communications across platforms.
Speaker: Austin Marin
What sector‑specific barriers (e.g., in education, healthcare, finance) hinder AI agent adoption, especially regarding handling of personally identifiable information (PII)?
Different industries face unique regulatory and technical constraints; pinpointing these informs targeted standards and accelerates deployment.
Speaker: Austin Marin
What standardized metrics and third‑party evaluation methods are needed to assess AI agent performance, safety, and security?
Objective, comparable metrics build confidence for buyers, facilitate model switching, and support transparent reporting of agent capabilities.
Speaker: Michael Sellitto
What lessons can be learned from standards that have failed or caused fragmentation (e.g., incompatible electrical plugs) to avoid similar pitfalls in AI standards?
Avoiding interoperability problems and market inefficiencies is crucial for seamless global AI commerce and integration.
Speaker: Owen Lauder
What should be the respective roles of government and industry in creating and governing AI standards, and how can they best collaborate?
Clarifying responsibilities ensures effective coordination, prevents duplicated regulatory burdens, and leverages expertise from both sectors.
Speaker: Michael Brown
How can AI agents incorporate privacy‑preserving, auditability, authenticity, and consent mechanisms into their design?
These principles protect users, satisfy regulatory expectations, and build trust in agent‑driven services.
Speaker: Wifredo Fernandez
How can international consensus on AI measurement, evaluation, and security methodologies be achieved through networks like the International Network for Advanced AI Measurement, Evaluation, and Science?
Harmonized evaluation standards enable comparable assessments across borders, fostering interoperability and shared confidence in AI systems.
Speaker: Austin Marin
Do agent commerce protocols across different companies operate competitively or cooperatively, and how can alignment be achieved?
Understanding the competitive vs cooperative dynamics informs how standards can be designed to promote interoperability without stifling innovation.
Speaker: Michael Brown

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Toward Collective Action: Roundtable on Safe & Trusted AI

Toward Collective Action: Roundtable on Safe & Trusted AI

Session at a glance: Summary, keypoints, and speakers overview

Summary

This discussion focused on defining safe and trusted AI for the African context, exploring what progress has been made on the continent, and identifying pathways for future collaboration. The panel included experts from various organizations working on AI governance, safety, and capacity building across Africa. Ambassador Philip Tigo emphasized that undesirable AI outcomes for Africa include systems that create dependency rather than building capacity, extract African data while concentrating value outside the continent, and perpetuate digital neocolonialism. Professor Jonathan Shock highlighted immediate risks like misinformation and disinformation campaigns, particularly around election periods, noting that AI enables malicious actors to conduct targeted campaigns at unprecedented scale.


Dr. Chinasa Okolo stressed the need for better documentation of AI harms occurring on the continent, citing examples of automated grading systems in African universities that caused problems for students. The panelists agreed that Africa needs to move beyond AI strategies to actual policies, with Ambassador Tigo noting that most African governments lack both comprehensive AI policies and the technical talent to implement them effectively. Mark Gaffley presented survey data showing that 75% of South Africans know very little about AI, highlighting the need for public education and awareness programs.


The discussion emphasized that Africans want empowerment and agency from AI systems, requiring models that understand local contexts, languages, and cultures. Panelists stressed the importance of collaboration over competition among African nations, with Professor Shock announcing the African Compute Initiative to share computational resources across universities. For deploying AI in critical infrastructure, the experts recommended careful procurement processes with safety benchmarks, maintaining non-AI alternatives, and building local capacity through AI safety institutes. The overall consensus was that Africa must develop its own AI capabilities while establishing guardrails to ensure these technologies serve African communities’ needs and values.


Keypoints

Major Discussion Points:

Defining AI risks and undesirable outcomes for Africa: The panel discussed Africa-specific concerns including digital neocolonialism, data extraction without local benefit, dependency rather than capacity building, misinformation/disinformation campaigns (especially targeting elections and female politicians), and systems built without African knowledge, wisdom, and cultural context.


What Africans want from AI systems: Panelists emphasized the need for empowerment and agency rather than dependency, equitable participation in AI development and governance, access to models for evaluation and research, and systems that understand local context, languages, and cultures. They stressed the importance of building local capacity and having alternatives to foreign-developed models.


Strengthening African cooperation and capacity: Discussion centered on moving from competition to collaboration across African countries, the need for actual AI policies (not just strategies), building technical talent and government fluency in AI, creating African AI safety institutes, and leveraging initiatives like compute sharing and grassroots organizations already doing important work.


Considerations for deploying AI in critical infrastructure: The panel addressed procurement guidelines that include safety benchmarks, maintaining human-in-the-loop systems for transparency, avoiding single-sourcing and maintaining alternatives (including analog options), the importance of local private sector partnerships, and ensuring governments have negotiation tools and capacity when dealing with big tech companies.


Addressing the digital divide and inclusion: Concerns were raised about ensuring AI advancement doesn’t widen existing gaps, with 64% of Africa lacking internet access, and the need to use AI to optimize development in areas like electricity and connectivity rather than just adopting AI for its own sake.


Overall Purpose:

The discussion aimed to explore what safe and trusted AI means specifically for African contexts, assess current progress on the continent, and identify promising pathways for collaboration. The panel sought to move beyond Western-centric AI safety discussions to focus on Africa-specific risks, needs, and solutions.


Overall Tone:

The discussion maintained a serious but constructive tone throughout, with participants showing both urgency about current challenges and optimism about potential solutions. There was notable frustration expressed about the gap between global AI development and African participation, but this was balanced by practical suggestions and examples of positive initiatives already underway. The tone became more collaborative and solution-focused as the discussion progressed, particularly when addressing cooperation and capacity-building opportunities.


Speakers

Speakers from the provided list:


Speaker 1: Event moderator/organizer, appears to be affiliated with AI Safety South Africa and involved in building local capacity for AI safety and evaluations research


Speaker 2: Co-moderator of the session (identified as Zach in the transcript)


Michelle Malonza: Co-moderator of the session, colleague of Speaker 2


Ambassador Philip Tigo: Special envoy on technology for the President of the Republic of Kenya


Professor Jonathan Shock: Associate professor in the Department of Mathematics and Applied Maths at UCT (University of Cape Town), Director of the UCT AI Initiative


Dr. Chinasa Okolo: Founder of Technicultura, Policy AI specialist at the UN Office for Digital and Emerging Technologies


Mark Gaffley: Director of legal and operations at the Center of Global AI Governance (GCG)


Audience: Multiple audience members who asked questions during the Q&A session


Additional speakers:


Marie-Ira Ducunda: Member of the research team (mentioned by Speaker 1 but did not speak in the transcript)


Gatoni: Member of the research team (mentioned by Speaker 1 but did not speak in the transcript)


Michel Malonza: Mentioned as co-moderator but appears to be the same person as Michelle Malonza


Dr. Kola Ideson: Research director at Research ICT Africa (mentioned as expected to join but did not appear to speak in the transcript)


Full session report: Comprehensive analysis and detailed insights

This panel discussion brought together leading experts in AI governance, safety, and capacity building to explore what safe and trusted AI means specifically for African contexts, moving beyond Western-centric frameworks to address continent-specific challenges and opportunities. The conversation was structured around three key questions: What does safe and trusted AI actually mean for the African context? What progress has already been made on the continent and by whom? And what are the most promising pathways for collaborations going forward?


Defining AI Risks and Undesirable Outcomes for Africa

The discussion began with Ambassador Philip Tigo’s powerful reframing of AI safety concerns through an African lens. Rather than focusing on speculative future risks, he identified three critical areas of immediate concern. First, AI systems that create dependency rather than building local capacity represent a form of “digital neocolonialism” that erodes human agency—particularly problematic for a continent still working to build its aspirational capacity. Second, systems that extract African data whilst concentrating value outside the continent, leaving African institutions as mere implementers or users, perpetuate exploitative economic relationships. Third, AI systems built without incorporating African knowledge, wisdom, and cultures pose what he termed an “existential threat” that goes beyond undesirable to “unacceptable.”


Professor Jonathan Shock expanded on these immediate risks by highlighting the breakdown of social trust through misinformation and disinformation campaigns. He provided concrete evidence of how AI-enabled disinformation is already disrupting African democracies, particularly during election periods, with campaigns often targeting female politicians through technology-facilitated gender-based violence. Crucially, he noted that individual malicious actors can now design their own agents to carry out disinformation campaigns at unprecedented scale, moving beyond the traditional focus on big tech companies to include distributed threats.


Dr. Chinasa Okolo contributed a critical observation about the invisibility of African AI harms in global discourse. She noted that current AI incident databases, whilst comprehensive for other regions, fail to capture African contexts adequately—searching for “Africa” redirects to “African American” content. This documentation gap means that African governments lack the evidence base needed to craft appropriate regulations and hold responsible parties accountable for harms affecting their communities. She also highlighted specific examples of AI systems causing harm in African universities, including grading systems that disadvantage students and procurement of AI solutions that fail to function as promised.


The panellists reached a crucial consensus on redefining “existential risk” for African contexts. Ambassador Tigo argued passionately that whilst some scientists should study traditional existential risks like rogue AI systems, the real existential threats to Africa are immediate: threats to democracy, social harmony, and human agency. This reframing proved influential throughout the discussion, with other panellists consistently returning to immediate, contextually relevant risks rather than speculative future scenarios.


What Africans Want from AI Systems

The conversation revealed a sophisticated understanding of African aspirations for AI that centres on empowerment and agency rather than mere access. Professor Jonathan Shock articulated this as wanting AI that increases people’s range of possibilities and enables informed decision-making within local contexts. However, he emphasised that current AI systems cannot provide this empowerment because they lack understanding of local contexts, languages, and cultural nuances.


Dr. Chinasa Okolo highlighted two key desires emerging from her engagement with young Africans across the continent. First, there is strong demand for equitable participation in AI governance structures and development processes, driven partly by widespread underemployment and recognition that AI has world-changing potential. Second, African researchers, scientists, and engineers want opportunities to contribute new research that advances the field globally, particularly around understanding how AI impacts people from “different castes, tribes, religions, gender, and the intersection of all of these”—moving beyond Western constructs of bias that focus primarily on race.


Ambassador Tigo provided a persona-based analysis of what different African stakeholders need from AI. Scientists require access to AI models for evaluation and safety research, particularly crucial since African countries are among the biggest users of systems like ChatGPT. He noted a concerning trend where citizens are using culturally blind systems for emotional advice, highlighting the mismatch between available AI tools and local needs. Governments need capacity to hold AI companies accountable for potential harms whilst building negotiation capabilities to engage effectively with trillion-dollar companies. A key challenge he identified is that many governments think “AI is ChatGPT,” revealing a fluency problem in the public sector that hampers effective governance.


Mark Gaffley’s presentation of survey data provided sobering context for these aspirations. His research revealed that 75% of South Africans know very little about AI, with most learning through informal channels like social media and television. This finding suggests that African populations may be “some way away from being able to define what they want from AI” because many citizens are unaware that the technology exists or do not understand its implications.


Strengthening African Cooperation and Capacity Building

The discussion revealed significant frustration with competitive approaches to AI development across African countries. Ambassador Tigo’s passionate intervention—”Stop competing. I’m really, it’s, I’m sorry, sometimes I stop being an ambassador at some point”—became a turning point in the conversation. He argued that AI is fundamentally different from traditional ICT infrastructure development and requires a “collective all-in effort” rather than competition over who builds the best data centres.


Professor Jonathan Shock provided concrete examples of successful collaborative initiatives already underway, including Masakhane (focused on African language technologies), Deep Learning Indaba (machine learning capacity building), GOAI Africa (AI governance), and Sisonke Biotik (biotechnology applications). He announced the launch of the African Compute Initiative at the University of Cape Town, which will provide shared computational resources, cloud platforms, and state-of-the-art GPUs to researchers across African universities. This initiative exemplifies the network effects possible when institutions focus on empowering others rather than competing.


Dr. Chinasa Okolo highlighted international opportunities for African voices in global AI governance, noting strong African representation on the UN’s International Scientific Panel on AI that exceeded expectations. She emphasised how her work with the World Bank on continental and national AI strategies demonstrates the possibilities for diaspora engagement in African AI governance, particularly around government procurement issues where African governments are being “bombarded” by suppliers selling often-unnecessary AI solutions.


The panellists identified several key infrastructure needs for effective collaboration: moving from AI strategies (which exist across many African countries) to actual implementable policies, building technical talent and government fluency in AI, creating African AI safety institutes, and leveraging existing grassroots organisations. Mark Gaffley’s educational initiatives, including MOOCs with relatable African imagery and scholarship programmes prioritising African women, represent practical steps toward building the foundational knowledge needed for informed participation.


Considerations for AI Integration into Critical Infrastructure

The discussion of AI deployment in critical infrastructure revealed sophisticated understanding of the challenges facing African governments. Ambassador Tigo acknowledged that governments face immense pressure to adopt AI technologies because young populations are already using these tools extensively, leaving little room for rational choices about non-adoption.


The panellists identified several key principles for responsible AI integration. First, procurement processes should include safety benchmarks and audit requirements, taking advantage of companies’ desire for African markets to negotiate better terms. Ambassador Tigo suggested developing negotiation tools and procurement guidelines to help governments engage more effectively with major tech companies. Second, governments should avoid single-sourcing and maintain alternatives, including both local private sector options and analogue systems for those unable to access digital solutions. Third, continuous monitoring and agile mechanisms are essential because AI technology evolves rapidly, unlike traditional infrastructure purchases.


Professor Jonathan Shock warned against the “Silicon Valley approach of move fast and break things” when dealing with government systems. He advocated for transparent, human-in-the-loop systems that maintain human agency in decision-making processes, noting the risks of losing institutional skills and becoming beholden to external companies. When discussing transparency, he acknowledged that whilst you can examine model weights, “it’s really difficult to tell what’s actually happening in there,” highlighting the practical limitations of technical transparency.


Dr. Chinasa Okolo emphasised the need for independent evaluation capacity through AI safety institutes, drawing parallels with the US National Institute of Standards and Technology whilst noting that African versions would need different designs aligned with local needs and values. She also stressed the importance of evaluating whether AI solutions are actually necessary, citing her World Bank research showing African governments being pressured to adopt solutions that often fail to deliver promised benefits.


Addressing Digital Exclusion and Development Priorities

A critical audience question highlighted that 64% of Africans lack internet access whilst AI development accelerates globally, raising concerns about AI advancement widening existing inequalities rather than promoting inclusion. This digital divide became a central theme in discussing development priorities.


Ambassador Tigo offered a strategic perspective on using AI to accelerate traditional development priorities rather than pursuing AI for its own sake. He provided examples from Kenya where AI is being used to optimise energy distribution and infrastructure development, arguing that African governments should “get AI for something else that drives development” rather than adopting AI for basic functions like chatbots.


Dr. Chinasa Okolo emphasised that simple, non-AI solutions often address development challenges more effectively than complex AI systems. She argued that building hospitals, paying teachers, and installing reliable electrical grids would solve many problems better than AI solutions, whilst also reducing opportunities for funds to be diverted or wasted on non-functional technologies.


Mark Gaffley provided a contrarian perspective, suggesting that maintaining analogue alternatives might preserve valuable human capabilities. He posed the philosophical question of whether the “digitally excluded” might retain cognitive abilities that become valuable as others experience dependency on AI systems, though he acknowledged this idea was “a bit out there.”


Pathways Forward and Ongoing Initiatives

The discussion identified several concrete action items and initiatives already underway. The African Compute Initiative represents a practical step toward shared computational resources, whilst educational programmes like Mark Gaffley’s MOOCs and scholarship initiatives address the foundational knowledge gap. The development of negotiation tools and procurement guidelines could help African governments engage more effectively with major tech companies.


Professor Shock highlighted the importance of existing collaborative networks and their potential for expansion. The success of initiatives like Masakhane and Deep Learning Indaba demonstrates that effective pan-African collaboration is already happening and can be scaled up for AI governance and safety work.


Dr. Chinasa Okolo’s work with international organisations like the World Bank and UN panels shows how African expertise can influence global AI governance whilst building capacity for local implementation. Her emphasis on moving from strategies to policies represents a crucial next step for many African countries.


However, significant challenges remain unresolved. The fundamental power imbalance between African governments and trillion-dollar tech companies requires innovative approaches that leverage market pressure rather than relying solely on regulatory mechanisms. The need for comprehensive AI incident databases that capture African contexts remains unfulfilled, limiting evidence-based policy development.


When discussing content provenance and watermarking AI-generated content, Professor Shock noted that “the cat is out of the bag” regarding detection technologies, suggesting that technical solutions alone cannot address misinformation challenges.


Conclusion

This discussion demonstrated the sophistication of African thinking about AI governance and safety, moving well beyond simplistic narratives of technological adoption or rejection. The panellists articulated a vision of AI development that prioritises African agency, contextual understanding, and collaborative approaches whilst remaining pragmatic about the pressures and opportunities facing the continent.


The conversation’s most significant contribution may be its reframing of AI safety from an African perspective, emphasising immediate threats to democracy and social cohesion over speculative future risks. This contextualised approach to AI governance offers valuable insights not only for African policymakers but for the global AI governance community seeking to understand how AI risks and benefits manifest differently across diverse contexts.


The emphasis on collaboration over competition, capacity building over dependency, and contextual understanding over universal solutions provides a framework for African AI development that could serve as a model for other regions seeking to assert agency in global AI governance. The concrete initiatives already underway—from the African Compute Initiative to collaborative research networks—demonstrate that this vision is moving from aspiration to implementation.


However, the significant challenges identified—from digital divides to power imbalances with tech companies—underscore the complexity of translating these principles into effective policies and practices. The path forward requires sustained effort across multiple fronts: building technical capacity, developing appropriate governance frameworks, fostering international collaboration, and ensuring that AI development serves African development priorities rather than becoming an end in itself.


Session transcript: complete transcript of the session
Speaker 1

The first share of the research team, I believe, is here with us today, including Marie-Ira Ducunda. We have Gatoni as well, and Michel Malonza, who will also be moderating with us today. And then we’ve got AI Safety South Africa, where we’re working on building local capacity to work on AI safety alongside evaluations research. So together, our organizations represent a growing ecosystem of African-led efforts on AI governance, safety, and capacity building. As you all must know, today we are exploring three interlinked questions. What does safe and trusted AI actually mean for the African context? What progress has already been made on the continent and by whom? And what are the most promising pathways for collaborations going forward?

And to explore those questions, we’ve got an amazing panel that I’m honored to introduce. We’ve got Dr. Chinasa Okolo on my left, who is the founder of Technicultura and a policy AI specialist at the UN Office for Digital and Emerging Technologies. And then we have Ambassador Philip Tigo, who serves as a special envoy on technology for the President of the Republic of Kenya. And then we have Professor Jonathan Shock, who is an associate professor in the Department of Mathematics and Applied Maths at UCT and the director of the UCT AI Initiative. And finally we also have Mark Gaffley, who is the director of legal and operations at the Center of Global AI Governance. Hopefully we’ll also have Dr.

Kola Ideson, who will join us in the next few minutes, who is the research director at Research ICT Africa. And in the next 47 minutes or so, we’ll spend about 30 minutes on the panel, followed by about 15 minutes for panel discussions. And then we’ll just conclude with some brief remarks to pull together the threads of what is discussed tonight. A few little housekeeping things before we start. So on the slide behind me, if you have not registered on NUMA, we’d love to stay connected and be in touch. And AI Safety South Africa and ELENA have exciting programs that you’d want to know about. So please scan this QR code on the top left of the screen.

With that link, you can leave us your contact details and also give us feedback on the event. And on the top right, you’ll see the link to Slido, which is the platform that we’ll use for Q&A. So you can just scan the code and then you’ll be redirected to a platform where you can leave your questions, and also upvote the questions you think we should prioritize in the Q&A section. Okay, that’s all the points I had to share. So without further ado, let’s get into it. I’ll hand it over to you, Michelle. I believe Zach will be starting with the first couple of questions, then I’ll take over after him.

Speaker 2

Okay, thank you. So I’ll be moderating part of the session and my colleague, Michelle, will be taking part of the questions. Afterward, we’ll progress to the Q&A. So I will start with the foundation, safe and trusted AI, which we can consider broadly as AI that delivers the outcomes we want. So I want to start with you, Ambassador, please. In the context of Africa in particular, what AI-driven outcomes would we consider undesirable?

Ambassador Philip Tigo

I think, and it’s quite interesting, I’ve been having this discussion of safety the whole day today. In the context of Africa, the first thing I want to be very careful about is that the African continent is not homogenous, right? So I’ll give a very specific Kenyan understanding of this, but I think it could potentially be something that is shared across the continent. The first part of this conversation is that, largely, if AI systems are creating a dependency rather than building capacity or capability, for me that’s undesirable, because the erosion of human agency, especially for a continent that is still trying to aspire, is a problem. If AI systems are extractors of African data, if they’re capturing our African markets and there’s a concentration of value outside the continent while leaving our institutions as mere implementers or users, then, as I said, it’s digital neocolonialism. The second part, of course, is that if these continue to be built without our knowledge, wisdom, cultures, it creates an existential threat.

It’s almost a civilization extinction story that then for me is just not undesirable. I think it goes beyond, it’s unacceptable. So those would be my two quick responses.

Speaker 2

Okay, thank you. So, Prof Jonathan, I will move over to you. So of the possible outcomes and risks, including some of what Ambassador Tigo mentioned, what do you see as the trade-offs between short- and long-term risks? And which ones should we consider now, and which can we consider in the future?

Professor Jonathan Shock

Sure, thank you very much for the question. So I agree with Ambassador Tigo in terms of these ideas of neocolonialism, and the bias that is inherent in the models and the context. I think these things are all extremely important. But I think there’s something else which we have to be very aware of, which is happening right now. In fact, it happened before AI came along.

And AI is allowing this to happen at a scale that at the moment we already see disruptions, but I think there’s real risk of a complete breakdown in trust. And that’s misinformation and disinformation. We’re seeing already around times of elections within Africa, within Ghana, within South Africa, within Nigeria, that misinformation, but also disinformation, and I disambiguate those by misinformation being, it might be that people are spreading things that they just don’t know is correct, but disinformation is really targeted campaigns. And what we’re seeing is that those targeted campaigns are often gendered, that it’s often against female politicians, that technology-facilitated gender-based violence is a massive issue against politicians, but more broadly. But I think that for me, one of the real things…

is the breakdown in trust that we’re seeing in society. We’ve seen already with social media how echo chambers form. AI is really allowing that to happen at scale by malicious actors who can focus in on particular election periods and destabilize what’s happening. To me, in the short term, that’s really worrying. I think it’s quite difficult to talk about the long term. We can think about what might happen in the next few months, but thinking about the long term threats, people have talked about existential threats in terms of AI getting out of control. I think that’s something that’s extremely important to study, but I think that within particular contexts there are things that are real that are happening now that we have to worry about and try to mitigate.

I think that’s really important. The other thing that I think is happening at the moment that I don’t hear a lot of people within the space talk about, within the policy space maybe talk about, is the issue of agents. And the fact that now a single malicious actor can design their own agent to carry out a misinformation campaign or a disinformation campaign. I think just over the last few months, we’ve seen that possibility come to light. And I think that’s a real worry and something that we need to understand. It’s not just now about the big tech firms. Of course, they have a major role to play in this. But I think now an individual actor can produce software that millions

Speaker 2

Okay, thank you. So, Dr. Chinasa, I’ll move over to you. So, given that the current development of frontier AI labs is driving some of the risks we are talking about, how can Africa monitor and mitigate those risks, given that most of the existing development happens outside of the context?

Dr. Chinasa Okolo

Yeah, great question. And this reminds me, I actually talked to an Alita researcher last year when I was at the Paris AI Action Summit about some work that they were interested in doing, like an AI incident database. And I think this is actually very important because when I look at current databases, and they’re really comprehensive for the most part, but honestly, when I look up or type in Africa, for example, it reverts back to African American. And I’m based in the U.S., and that’s helpful for me to know, obviously, because I get coded as African American there, but finding just basic information about AI harms on the continent is still very hard if you’re not tuned in.

I get stuff on Twitter that comes up all the time. There are a couple of cases with some African universities, particularly in Nigeria, and also in South Africa, that had issues with AI being used to automatically grade standardized exams, and students having issues with trying to rebut some of the scores that they received. And so that did not make mainstream news, maybe in the countries, but not generally. And so I think that this is a really important one, so we understand how AI impacts and affects the African continent and communities on the continent, and then also so that governments can respond accurately by crafting regulations that can serve the needs of communities and also ensure that the responsible parties are held accountable for the harms that they’re causing to different communities.

Speaker 2

Yeah, thank you. So just a quick follow-up on that. You mentioned holding responsible parties accountable. Is there anything in particular our stakeholders can do in that regard? Any short-term or long-term efforts that we can make?

Dr. Chinasa Okolo

Yeah, it’s hard to say because, you know, as you can tell by my accent, I am American, I’m also Nigerian, and so I do understand a little bit of the intricacies between both countries. In the U.S. there are a little bit more formal ways for advocacy. Like you can actually write directly to your congressman. You can call their office. Most often you won’t get them directly, but you’ll get their staff members, and they often respond. Like, people write to them for basic issues like, oh, I can’t get my passport in time, please help expedite this. And, oh, there’s this issue happening at my school, please help with this. And so, honestly, I’m not very aware of similar pathways across African countries.

But I think that this civil society advocacy, particularly grouping together, you know, forming these coalitions, can have a lot of power. It’s just, again, like there are a lot of incentives in place for governments to suppress this, and we’ve seen this turn into violence, particularly against youth. And so I am aware of this, and I don’t want to recommend this so people get harmed. But I think there are ways that, you know, again, this coalition building can be successful.

Ambassador Philip Tigo

I wanted to jump into that because you talk about policy. And I think, and let’s be real, and that’s why I think I, when Irina asked me to come to this through my colleague Stephanie, I thought it would be important because this is a very Africa-centric discussion. I’ve been into all the global ones. I think five today. I think let’s be very clear. But if we have a couple of AI strategies in the continent, we do not necessarily have AI policies in the continent. So there’s already no mechanism to do this. And that’s AI in general. We’re not even talking specific about safety. Secondly, we do not have necessarily the talent to do this in the continent.

I think that’s why what you guys are doing is important. And I say talent in the other spaces, not even in public sector. When you go into public sector, unfortunately my colleagues just think AI is ChatGPT. Let’s be honest. So there’s basically a fluency question. Safety is so far in the scale that they’re not even thinking about it. So I think for me, the sense that I kind of have in this is that it needs to be an all-in effort. And this is where my sense in the African continent is where that dichotomy between civil society and governments disappears. Because if it’s about existential risk to the continent, and I say existential risk, it’s about existential risk in terms of harms to society.

I’m not talking about… I mean, a few scientists like us can talk about models and harms to the model. The chances of an AI pressing a nuclear button in Africa, come on. And that’s my point. So we have to even redefine what existential risk for Africa on AI means. And I think this is where we really have to break from that. And we can have a few of our scientists doing the existential risk models, models running rogue, and science fiction. I think that’s important work. But the risk that he’s mentioned is real, right? Threats to democracy, threats to harmony of society are real risks. And then this is how then you begin to build guardrails from a point of understanding of what is really relevant to the African continent.

Otherwise, we get lost in the other conversation: chances of happening, nil, but important. But these are the risks: chances of happening, high, but less prioritized. Good data, folks.

Speaker 2

Okay. Thank you so much for that contribution. If you have a question, please use the QR codes and type your question. We’ll come back to that, but I’ll be moving over to my colleague, who will take over with the rest of the questions. Michelle.

Michelle Malonza

Just to join the conversation that you’re already having: I think so far we’ve talked a lot about what we don’t want and the kinds of risks that Africa should be focusing on versus the rest of the world, and now I’d like us to talk about how we define what we want these systems to look like and what trustworthy systems would look like. And so I’d like to start with Mark, talking about what his work at GCG has revealed so far about what Africans want from these systems.

Mark Gaffley

Cool. Thank you. Thank you for the question, Michelle, and obviously for the opportunity to speak this afternoon. I see the answer to this question as twofold. So first is the answer that addresses what we actually want to define from what we want from AI. As a high -level response, I would describe that as the desires of African citizens on the ground. especially our local communities and the marginalized and vulnerable amongst them who don’t necessarily have a voice or a seat at the decision -making table. The second response is the more likely scenario, in my view, that we remain subject to the whims, benevolent or otherwise, of those practitioners who are able to scale the most useful and not necessarily the most beneficial AI tools for our people.

Irrespective of whether those practitioners are based within national borders, across the broader continent or in foreign jurisdictions around the world. When I consider these responses in the context of GCG’s work, two things come to mind. The first is the results from a public awareness and perceptions of AI survey we released in September last year. The survey was a module in the annual South African Social Attitudes Survey, which is run nationally. The survey revealed that nearly 75% of respondents knew very little about AI. And for those who did know about AI, most of their learning was through informal and unstructured channels, including through social media and television. These findings may reveal that African populations are some way away from being able to define what they want from AI, because quite simply the majority of citizens are unaware that the technology even exists.

This drives the need for creating awareness and educating our peers on AI, so that when the time does come to interact with it, they can make informed and meaningful decisions about what they want. On this, GCG’s other work I’d like to highlight are the various short courses we run on ethical and human rights implications of artificial intelligence through accredited universities in South Africa. These courses attract interest from all over the world, and for each iteration we’ve received applications in the thousands. As part of these offerings, we are also prioritizing awarding scholarships, scholarships for African women as part of our Women in Focus series. Why this work is important to the question is that the courses, even if incrementally, are slowly moving the needle on the figure I mentioned earlier, equipping participants with the skills to pass on knowledge to their peers about the many benefits and risks related to AI technologies.

Finally, as a further effort towards equipping Africans to be able to define their own wants and needs, we have an online MOOC launching imminently that will offer our course content freely to the public using relatable caricatures and imagery, which I hope will further drive this objective of equipping Africans to understand and make their own informed decisions about what AI technologies to allow into their lives and what outcomes they want those tools to achieve for them.

Michelle Malonza

Thank you. I think that’s really interesting because it ties right to what the Ambassador was saying: in order to know what you want as Africans, you have to know that the technology exists, what AI technologies exist, and what exact technology we are talking about when we say AI. So maybe I should let the rest of the panel say what they think Africans want, and then we’ll go into

Professor Jonathan Shock

So I think, you know, I don’t want to speak to what an individual person wants, but I think that what we all want is empowerment. We all want agency. And so there is a possibility that we can think about AI as a way to give agency, and I spoke about agents before, but I mean agency in a slightly different sense: for people to understand the possibilities that they have, and to increase that range of possibilities so that people can make choices. And so knowing that there is something out there that can empower you is great, but it has to be able to empower you within a context. And, you know, we’ve spoken many times about the lack of local context within these models, the lack of contextual language information.

And until those things have been fixed, it’s not actually going to empower people. So to me, it has to be about making sure that the model… understands local context, and then making sure that it’s actually giving people agency to make decisions. I think that’s really important.

Dr. Chinasa Okolo

Awesome. Yeah, so I’ll try to be a little bit nuanced about this because, again, I’m Nigerian-American. I grew up right in the middle of the United States. I have been fortunate to travel across the continent very frequently over the past couple of years, but going off of what Jonathan said, I do see a real opportunity, one, to contribute to equitable governance structures and mechanisms, but also an opportunity to actually participate equitably in AI development more broadly. That’s what I see a lot of young Africans want, particularly because the epidemic of underemployment is very stark on the continent, and also because these systems have the power to change the world and have changed the world already.

A lot of our conversations on AI safety can also provide new avenues for African researchers, scientists, and engineers to contribute new research that we’re still missing. Because particularly when we consider the U.S. context, or even the prominent AI safety or fairness conferences, a lot of the work on bias is rooted in race, for example, which is, again, a Western construct. And so if we understand how AI impacts people from different castes, tribes, religions, genders, and the intersection of all of these, I think this will, one, advance the field as a whole, but also provide more opportunities for the governance structures that are needed within African contexts.

Ambassador Philip Tigo

Sorry. No, I think a couple of things. And I take this from a persona approach because, again, I think Africa and the communities are a little bit different, and I’ll take the three important ones. One, I think it’s basically our scientists, right? Our scientists, for me, need access, because you cannot talk about benchmarks and evaluations around safety if you don’t have access to these models, and we are the ones who bear the brunt of these models. I’ve given an example: Kenya is the biggest user of ChatGPT, and the top use case of ChatGPT is emotional advice. That’s real data. So you’re asking a model for emotional advice that doesn’t understand your context. What does that mean? So I think there has to be a way for our scientists to have access to these models, which also means capacity for them to be able to evaluate these models. The second persona is governments: a way that, working with scientists, governments can hold those companies to account for the potential adverse harms they can do to our society and community. That’s what governments want, but what governments need is capacity, because you’re talking to five-trillion-dollar companies and your GDP is like a hundred billion dollars.

So I think potentially this is where there has to be collaboration, because these companies understand market pressure, not necessarily regulatory pressure. So there has to be a nuanced approach to how you do that. The third part, of course, I think is the citizenry, right? The citizenry, in my sense, just needs to be included. And part of inclusivity is the safety work, right? So you must be included in a safe environment so that you’re not left to the whims of agents or folks who can manipulate the crowd. So I look at those three personas. But I think the underlying infrastructure in this is looking at how we ensure, as a collective on the continent, that we can build our own models.

And I think that’s important, right? Because part of agency is human agency, but part of the challenge to agency is over-reliance on external models. Models that understand the continent, that understand local context and culture: that capability to build our own models, nuanced to our own context, is a good option. Then you are not left to Gemini, Qwen, OpenAI, Anthropic, and I can mention five of them. What choice do we have right now if we don’t have an alternative, potentially built from open source?

Michelle Malonza

Thank you very much for all your responses. I really appreciate the point that capacity and access are how we are going to achieve agency and empowerment. That brings me to the next question, which all of you have touched on: what is going to make it possible for us to strengthen cooperation and engagement across the region in Africa? Because that’s a key part of making the access possible to begin with. I can see the Ambassador has immediate thoughts, so let’s start with you, since you are very expressive, and then go to Dr. Chinasa and the rest of the panel.

Ambassador Philip Tigo

Stop competing. I’m sorry, sometimes I stop being an ambassador at some point. Because AI is not ICT. It’s not about who’s going to build the best data centers, who’s going to do X or Y. This is a collective, all-in effort. For me, that’s the biggest shift that we need to make: it’s not about competition, it’s about cooperation and collaboration. That’s what will make us work together. And I’m saying this out of frustration, because I see it. And it’s a waste of money. But also, it’s just a waste.

Dr. Chinasa Okolo

Alrighty. So, I know in the draft of this I mentioned I’d talk about some of the stuff at the UN; I’m speaking in my personal capacity too. But, you know, we just recently launched the International Scientific Panel on AI. I read nearly every application for that, and I was very happy to see African representation on the panel; we have eight members, I believe, where I was thinking we would get at least around four or five. So it’s really good to see that our voices are valued, and also, more broadly, that there are other efforts to complement the panel, including the Africa AI Council. I look forward to seeing how this plays into the work that the UN is doing, and also some of the other initiatives around the global AI dialogues, which play directly into the panel’s work as well. Not to say that this inclusion alone will actually lead to change; honestly, sometimes it doesn’t. But I think the UN is a little bit special, and in some cases we’ve seen how the work that was done with the High-Level Advisory Body (HLAB) on AI really led to increased conversations and discourse on this idea of international AI cooperation.

And so I hope to see African governments do this kind of work individually. I had the chance to serve on the AU’s Continental AI Strategy; I did that work when I was a PhD student, about four years ago, and then also served as a drafting member on Nigeria’s National AI Strategy as well. And I did this all the way from the US, and I think there are many opportunities for African countries, and also those throughout the global majority, to build their own initiatives for this kind of AI cooperation.

Professor Jonathan Shock

Yeah, I’d like to follow up, in particular, on Ambassador Tigo’s point about the need not to be competing with each other. Within Africa, I think there are already really, really good examples of people working together. You’ve got Masakhane, you’ve got the Deep Learning Indaba, you’ve got GOAI Africa, you’ve got SisonkeBiotik: all of these grassroots organizations already doing amazing work with limited resources. You then add some resources to this and you really superpower what people can do. At the University of Cape Town, the African Compute Initiative was announced today. The idea is that we happen to have a cluster, an HPC, a high-performance computing center, currently with a lot of capacity, that is to say a lot of space. We are setting up an African Compute Initiative which researchers around Africa are going to be able to use. We’re setting up a cloud platform, we’re bringing in GPUs, state-of-the-art compute, that’s going to allow people at other universities to do their research. This is not a competition; this is really about how one set of people empowers another set of people. Because, you know, there is no competing with a trillion-dollar company, but what we actually have is a network effect, and that’s really, really powerful in and of itself. So we need to be working with academia, with civil society, with government, with the private sector; all of these groupings need to work together.

Michelle Malonza

Alright, so I’ll do the final question before we get into the Q&A. You’ve all touched on how you think engagement and policy should work around the continent, moving from strategies to policy. So if Africa is able to come up with its own systems, or find a way to have leverage over the companies to localize the systems they are going to deploy on the continent, what considerations should be made while deploying those specific systems into our critical infrastructure? Because that somehow seems like an inevitability. So what considerations should African governments be making when thinking about integrating AI into critical infrastructure?

I can start with Mark, since he’s the one who didn’t answer in the last round of questions; that’s the price you pay for staying silent.

Mark Gaffley

…for the problem we’re trying to solve for. Sorry, John. So, yeah, just to ask if it is actually necessary. And the other thing, recognising access and inclusion issues, is just to keep the alternatives open. So if you are going to digitise something, or use AI tools to solve a particular problem, just make sure that those who can’t access them still have their kind of analogue approaches to doing things. I did mention to someone earlier that I was the against-tech person in the room, so I think that’s why I’m pushing the analogue way.

Professor Jonathan Shock

Cool. So I think we just have to be very, very careful here of the, you know, the Silicon Valley approach of move fast and break things. If you try to take a system, some sort of infrastructure system, be it, you know, a government department, and try to AI-ify it, there are massive, massive risks there. That’s not to say that we shouldn’t be thinking about this and doing it very carefully. But we have to understand, again, I go back to agency, the agency that we remove when we get an AI system to make the decisions for us. I think there are really good ways to do this with a human in the loop, where we can have transparent systems so we can understand what the decision-making process is.

But if we’re simply going to a company that sells a product and says we can streamline your service, then we’re really beholden to that company. And if it turns out that that’s not the right solution, trying to undo it when you’ve lost the skills puts you in a really difficult position. So I think we need to move at a reasonable pace but not break too many things along the way. That’s a real risk.

Ambassador Philip Tigo

Well, I think I probably have an advantage because I’m in government, so we face a lot of these things. Partly it’s to understand the challenge, right? The challenge, remember, is that, especially for the African continent, with a median age of 19.7, a very young population is already engaging with these tools while government is engaging with 19th-century technology, and so there’s a gap. There’s already sufficient pressure for governments to engage with these new tools. So there’s really not much room to make the rational choice of not using these new technologies, because you have a population that is already using them. So then what does that leave you as options? The options for me: it means you need to start creating some form of guardrails even before you acquire the tools.

So procurement is one tool. We can write a lot of these rules into the procurement documents, and I don’t think many of us are doing that. Include safety benchmarks. A lot of these guys don’t want to be audited, so just get that in there, because they want your business. And I have a sense maybe that’s the sweet spot, the point of decision-making: at that time, everybody wants to talk to you, and that’s where African countries lose the game. The second part, of course, is that because the technology changes very quickly, I have a sense that what we need to do is continuously have these agile mechanisms that keep pushing the foundational questions, because this is not one technology. It’s not a laptop that you’re going to buy and use for three years.

It’s going to change in the next two, three months. So I think potentially we need that. Third, I think, is contingency planning: this single-sourcing business should not work; we need options. And for me the fourth consideration is: always have the local option open, because, I mean, data localization, sovereignty. It’s about sovereignty. Part of it is we don’t do that, and that’s where we also start to make strategic decisions separating the private sector into global big tech, local private sector companies, and small and medium enterprises. I think we need to do that deliberately, because then at least the local companies can be managed by domestic law.

These other ones you probably have to go to Silicon Valley to litigate. So for me, and it will keep on evolving, these are things that I’m seeing right now as potential options. But I think it still all boils down to the capacity of the decision maker or the policy maker to be able to digest these insights. Where we lose is negotiations. And part of what my team continuously does, and maybe this is something you need to consider, is think about these playbooks, guidebooks, negotiation tools, so that when they are negotiating, at least they have some knowledge as their power to engage. Because you can’t match the hundred billion against the five trillion, but when you have knowledge and market insights, you’re actually in a better position to engage.

Negotiate.

Dr. Chinasa Okolo

Yeah, so I definitely agree with my co-panelists on a lot of the topics brought up. I would say, for the first one, particularly around the need for AI as an actual solution: governments really need to evaluate whether AI or deep-learning-based approaches are actually necessary, or whether simple, non-AI solutions would do. And then also around the need for guidelines on procurement. I’ve been doing some work with the World Bank, and we’ve seen in our work that a lot of African governments, across the majority of regions, are really being bombarded by suppliers to basically buy solutions. A lot of them, I think, are honestly unnecessary, and a lot of governments don’t have the capacity to evaluate these and make decisions, let’s say, transparently in-house.

And I think the key part of actually building the capacity will be establishing AI safety institutes, or whatever name governments want to call them. Within the United States, this is embedded within the National Institute of Standards and Technology, and they test more than technology: food, lotions, cosmetics, all that stuff, too. And this may not look the same across Africa, across Southeast Asia, South Asia, et cetera, but it really needs to be done, again, just to have this independent capacity, and also not to be reliant on multilateral lenders and foreign organizations, or even philanthropic organizations, that may be funding or providing solutions that may not be aligned with African needs and values, or may not even be necessary in the first place.

Michelle Malonza

Thank you so much for your responses; they were very thoughtful. To figure out what we don’t want, we have to think about what specifically African countries consider risky, prioritizing the short term; and figuring out what we do want comes down to our capacity, our autonomy, to make that decision and then localize in that context. And in terms of collaborating across the board, the sense I’m getting from the panel generally is that we need to choose cooperation over competition, so that we can have leverage with the big companies.

So thank you so much for your detailed and thoughtful responses. I’ll hand it over to Zach to get us into the Q&A session.

Speaker 2

Okay, thank you. So we’re going to take a few questions, and maybe I’ll also take one or two questions from the audience. One of the questions here is kind of broad, so maybe, Prof. Shock, I’ll hand it over to you for 30 seconds. The question is: to improve inclusivity and trust, what should an ideal AI model optimize for?

Professor Jonathan Shock

Gosh, that’s a difficult question. I think part of it has to be about transparency: how is a decision being made? People talk about the sort of black-box problem of AI systems. In fact, this isn’t quite the right way to look at these systems. You can look at exactly what’s happening inside the model, you can look at all the weights of the matrices, but it’s really difficult to tell what’s actually happening in there. So building transparent systems that are understandable, I think that’s one way to build trust. Yeah, I think that’s a way to think about it.

Speaker 2

Okay, thank you for that. There is one question here also about what are the most significant misconceptions about the current state of AI. Maybe Dr. Chinasa.

Dr. Chinasa Okolo

I’ll probably be redundant with some of the earlier topics we discussed on the panel. But, again, the misconception that AI is a panacea, or a band-aid, or a solution for a lot of things, particularly development challenges. I think we see African governments doubling down on adopting and procuring these AI solutions when, honestly, building hospitals, paying teachers, and installing and sustaining reliable electrical grids would actually solve the problems better, maybe not more easily, but better, and also with less opportunity for funds being diverted or wasted on a non-functional solution. So that’s one thing; I think my fellow panelists would probably have other good comments as well.

Speaker 2

Alright, is there any question from the audience? Maybe we can take one question. Okay, I will take one, but very brief.

Audience

First of all, thank you for being digitally inclusive for those of us who couldn’t use the QR code. My question is to Professor Shock. So you talked about misinformation and disinformation; maybe I can work my way back a little bit. I think in some ways we need to start talking about disincentivizing some types of AI, and this is what I mean. Usually when we talk about disinformation, we think about it from the user’s perspective, right? But consider the tools themselves: I don’t see why there’s this sort of massification of AI tools for media creation. It’s not very necessary. There’s a running joke about someone saying, well, I was hoping AI would be created to do some of the hard work that I do at home, like laundry or housekeeping, so I’d have more time to actually do media and entertainment, but it’s the reverse, right?

So we’re having AI do all of this sort of stuff, and we’re not really making progress on robotics and stuff like that; relatively compared to LLMs, that is. So my question is: should we have some sort of, say, mandatory watermark for AI-generated media? In that case, if I see some video or some songs or some pictures, I know it’s AI-generated, and in some ways I’m naturally not inclined to believe it. Is that a workable solution?

Professor Jonathan Shock

I think the cat is out of the bag. It’s great if some organizations do put watermarks on; indeed, within China and within some of the other companies, they are beginning to do that. But because we now have open-source models, and the open-source models are getting very, very good, if a malicious actor wants to set out a disinformation campaign, they’re just going to choose the one that doesn’t have the watermarks. I can see that one could, for instance, have media where there are some requirements to carry information about whether or not it’s come from an AI system. But when there are choices between watermarked output and non-watermarked output, the malicious actor is just going to choose the one which is going to subvert the system.

So I think that it may be a stopgap, but I think it’s a very short one.

Speaker 2

Okay, thank you. In 20 seconds.

Audience

So this is to the panelists. I would say we have about 64% of the continent of Africa that doesn’t have access to the internet and so is digitally excluded. So my question is: how do we make sure that our advancements with AI are not widening the digital divide? I think it’s a really big problem. As we’re moving forward with AI, there are people who don’t have access to the internet, electricity, and other things. So how do we ensure that we’re also thinking about those digitally excluded individuals? Thank you.

Mark Gaffley

This is a very abstract response, but it’s something I’ve been working on, so I’ll float it here. It’s this idea of the digitally excluded as the last vestiges of creativity left on the planet. If you play it out over time, those who don’t have access, given what I said about mental atrophy and cognitive decline, et cetera, end up being the ones we eventually come to for creative ideas and independent decision-making abilities. So, in a way, just to flip that: perhaps this focus on not having access as being excluded could, way down the line, actually mean you are included, and in fact relied on, because you kept your cognitive abilities intact.

So yeah, a bit out there, but I thought I’d float it.

Ambassador Philip Tigo

In that particular instance, this is where I think AI becomes interesting. Part of what I always speak about is the unfinished business that African governments need to do. It’s about connectivity, it’s about electricity, it’s about literacy, it’s about the kind of old infrastructure that we’ve not done. So for the African continent, this is where you start to use AI to optimize development; AI accelerates development. And if you look at what we’re doing in Kenya, at least, that is what we’re doing. For example, we’ve realized, with artificial intelligence, that a lot of our energy optimization was wrong, because we were going for last-mile electricity connectivity.

But now, with AI, we’re realizing with the World Bank that you could do this a little bit differently. All I’m saying is that we can leverage this technology on those non-sensitive capabilities to actually accelerate development, so that, again, it’s not AI for AI’s sake. So, for African governments: don’t get AI for chat, right? Get AI for something else that drives development.

Speaker 2

Alright, thank you. We only have one minute for questions, so I will take the last two questions together, and our panelists will answer them briefly. So, one question here, one question there.

Audience

My question is a little philosophical. We talked about how right now AI is in a race, as happens whenever a new technology comes: each country and each company is trying to be capitalistic and one-up the others. Uniquely with AI, though, AI might just be the one to catch up with itself; there’s a possibility, right? There are so many economic structures out there, like socialism and capitalism, which focus on optimizing certain things, like engagement on social media, for example. So if AI had to decide on an ideal structure for humanity, I would just like your opinions on that.

Speaker 2

Okay, thank you. We’ll take one question here.

Audience

Yeah, okay, thank you. So I’m going to consider two things: policy, and our generation at large. Considering the zeal that we have for learning AI, is the next generation safer as well? And considering what you’re saying about needing policy: should we just say we need policy because we can catch AI at where it actually is now in Africa, considering it hasn’t spread that much yet, and put policy around who is going to learn this and who is going to know this about AI?

Speaker 2

Okay, thank you. So I think these two questions will be split across our panelists, so who wants to go first?

Dr. Chinasa Okolo

All righty. Yeah, I’ll take the policy one. I’m very hopeful for African governments in particular when it comes to AI policy. There is, let’s say, a big learning curve, or actually an implementation curve, from the 20 or so strategies and two draft policy frameworks. And there is an opportunity for the younger generation to be involved. One obvious way is providing feedback on different strategies; a couple of countries have had open feedback periods. Most of them haven’t, unfortunately. But despite that, doing research and legal analysis and providing those findings openly can actually drive a lot of change. If there happen to be formal mechanisms to provide this feedback, obviously take advantage of them.

If not, you know, create your own avenues or pathways to do so. And then I’ll let my panelists speak. Okay.

Speaker 2

Mark, do you want to add something? All right.

Mark Gaffley

Very briefly. Okay. Well, that would be my point: I think if AI were to structure humanity, we’d be very efficient and we’d keep to time.

Speaker 2

All right. Thank you so much for your contribution. We’ll hand it over to Iman so that she can wrap up.

Speaker 1

Thank you so much. I’ll be super brief. Well, I’ll first start by thanking our incredible panel. Thanks a lot for your insights and energy and time. Thanks to you all for coming. It’s been a long few days, I imagine, being here at the conference. There are such great people to talk to and learn from. Before we wrap up, we’d love to take a picture with the panel. So I’ll invite you to just step forward here so that we can grab a picture together. And as they do that, for everyone: we have a social happening at 7:30 today at Cafe Lota. That is in a museum close by. You could just, like, Google it. And we’d love to see you there.

We’re going to be heading there at 7:30. Thanks, guys. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
A
Ambassador Philip Tigo
7 arguments · 175 words per minute · 1976 words · 674 seconds
Argument 1
AI systems creating dependency rather than building capacity represents digital neocolonialism
EXPLANATION
Ambassador Tigo argues that AI systems become problematic when they create dependency rather than building local capacity or capability, leading to erosion of human agency. He views this as a form of digital neocolonialism where African institutions become mere implementers or users while value is concentrated outside the continent.
EVIDENCE
He mentions that if AI systems are extractors of African data, capturing African markets with concentration of value outside the continent while leaving institutions as mere implementers or users, this constitutes digital neocolonialism.
MAJOR DISCUSSION POINT
Defining Safe and Trusted AI for Africa
AGREED WITH
Professor Jonathan Shock
Argument 2
AI built without African knowledge, wisdom, and cultures creates existential threats
EXPLANATION
Ambassador Tigo contends that when AI systems continue to be built without incorporating African knowledge, wisdom, and cultures, they create an existential threat. He describes this as almost a civilization extinction story that goes beyond being undesirable to being unacceptable.
MAJOR DISCUSSION POINT
Defining Safe and Trusted AI for Africa
AGREED WITH
Professor Jonathan Shock
Argument 3
African scientists need access to AI models to evaluate systems that impact their communities
EXPLANATION
Ambassador Tigo argues that African scientists need access to AI models because they cannot discuss benchmarks and evaluations around safety without access to these models, especially since Africans bear the brunt of these models’ impacts. He emphasizes that governments need capacity to hold companies accountable for potential adverse harms to society and community.
EVIDENCE
He provides the example that Kenya is the biggest user of ChatGPT and the first use case is emotional advice, meaning people are asking a model for emotional advice that doesn’t understand their context.
MAJOR DISCUSSION POINT
African Participation and Agency in AI Development
Argument 4
Building local AI models is essential to avoid over-reliance on foreign systems
EXPLANATION
Ambassador Tigo argues that part of agency includes human agency, but also avoiding over-reliance on external models. He believes the continent needs the capability to build its own models that are nuanced to local context, providing alternatives to relying solely on major foreign AI companies.
EVIDENCE
He mentions that currently there are limited choices with only Gemini, Qwen, OpenAI, Anthropic and a few others, questioning what alternatives Africa has if they don’t build their own models from open source.
MAJOR DISCUSSION POINT
African Participation and Agency in AI Development
Argument 5
Competition between African countries on AI is counterproductive – cooperation is essential
EXPLANATION
Ambassador Tigo strongly argues that AI development should not be about competition between African countries, but rather about cooperation and collaboration. He emphasizes that AI is not like ICT infrastructure where countries compete on who builds the best data centers, but requires a collective all-in effort.
EVIDENCE
He expresses frustration at seeing wasteful competition and emphasizes this as the biggest shift needed for African countries to work together effectively.
MAJOR DISCUSSION POINT
Collaboration and Capacity Building Across Africa
AGREED WITH
Professor Jonathan Shock
Argument 6
Governments should establish guardrails and safety benchmarks in procurement processes
EXPLANATION
Ambassador Tigo argues that since young African populations are already engaging with AI tools while governments lag behind, there’s pressure to adopt new technologies. He suggests that governments should create guardrails before acquiring tools, particularly through procurement processes that include safety benchmarks and audit requirements.
EVIDENCE
He notes that many companies don’t want to be audited, but they want government business, making the point of procurement decision a strategic opportunity where governments have leverage.
MAJOR DISCUSSION POINT
AI Integration into Critical Infrastructure
Argument 7
AI should be used to optimize development and accelerate infrastructure projects
EXPLANATION
Ambassador Tigo argues that for the African continent, AI should be leveraged to optimize development and address unfinished business like connectivity, electricity, and literacy. He advocates for using AI to accelerate development rather than using AI for its own sake.
EVIDENCE
He provides the example of Kenya using AI with the World Bank to realize that their energy optimization approach for last-mile electricity connectivity was wrong, and AI helped them understand how to do it differently.
MAJOR DISCUSSION POINT
Addressing Digital Exclusion and Development Priorities
Professor Jonathan Shock
6 arguments · 184 words per minute · 1458 words · 473 seconds
Argument 1
Misinformation and disinformation campaigns targeting elections and female politicians pose immediate risks
EXPLANATION
Professor Shock argues that misinformation and disinformation represent immediate threats that are already causing disruptions, with real risk of complete breakdown in trust. He distinguishes between misinformation (spreading incorrect information unknowingly) and disinformation (targeted campaigns), noting that these campaigns are often gendered and target female politicians.
EVIDENCE
He cites examples of technology-facilitated gender-based violence against politicians during election periods in Ghana, South Africa, and Nigeria, and mentions how social media has already created echo chambers that AI is now allowing to happen at scale.
MAJOR DISCUSSION POINT
Defining Safe and Trusted AI for Africa
Argument 2
Individual malicious actors can now design agents for disinformation campaigns at scale
EXPLANATION
Professor Shock warns that over the last few months, the possibility has emerged for single malicious actors to design their own agents to carry out misinformation or disinformation campaigns. He emphasizes this is no longer just about big tech firms, but about individual actors being able to produce software that can reach millions.
MAJOR DISCUSSION POINT
Defining Safe and Trusted AI for Africa
Argument 3
AI should provide empowerment and agency within local contexts and languages
EXPLANATION
Professor Shock argues that what people want is empowerment and agency, and AI has the possibility to give people agency to understand their possibilities and increase their range of choices. However, he emphasizes that this requires AI to understand local context and contextual language information.
EVIDENCE
He notes the lack of local context within current models and the absence of contextual language information, stating that until these issues are fixed, AI won’t actually empower people.
MAJOR DISCUSSION POINT
African Participation and Agency in AI Development
AGREED WITH
Ambassador Philip Tigo
Argument 4
Existing grassroots organizations like Masakhane and Deep Learning Indaba provide strong foundations
EXPLANATION
Professor Shock highlights that within Africa, there are already excellent examples of people working together through grassroots organizations. He argues that these organizations are already doing amazing work with limited resources, and adding resources to them would superpower what they can accomplish.
EVIDENCE
He specifically mentions Masakhane, Deep Learning Indaba, GOAI Africa, and SisonkeBiotik as examples of these grassroots organizations.
MAJOR DISCUSSION POINT
Collaboration and Capacity Building Across Africa
AGREED WITH
Ambassador Philip Tigo
Argument 5
The African Compute Initiative will provide shared computational resources across universities
EXPLANATION
Professor Shock announces that the University of Cape Town is launching the African Compute Initiative, which will allow researchers around Africa to use their high-performance computer center. The initiative involves setting up a cloud platform and bringing in state-of-the-art GPUs to enable other universities to conduct their research.
EVIDENCE
He mentions that UCT has a high-performance computer center with significant capacity and space, and they are building this as a shared resource rather than competing with trillion-dollar companies.
MAJOR DISCUSSION POINT
Collaboration and Capacity Building Across Africa
Argument 6
Transparent, human-in-the-loop systems are preferable to black box solutions
EXPLANATION
Professor Shock warns against the Silicon Valley approach of ‘move fast and break things’ when implementing AI in infrastructure systems. He advocates for transparent systems with human-in-the-loop decision-making where people can understand the decision-making process, rather than simply adopting products from companies that promise to streamline services.
EVIDENCE
He explains that while you can examine the weights and matrices in AI models, it’s difficult to understand what’s actually happening, making transparency crucial for building trust.
MAJOR DISCUSSION POINT
AI Integration into Critical Infrastructure
Dr. Chinasa Okolo
6 arguments · 178 words per minute · 1638 words · 551 seconds
Argument 1
Current AI incident databases lack comprehensive coverage of African contexts and harms
EXPLANATION
Dr. Okolo points out that when searching for Africa in current AI incident databases, results revert to African American contexts, making it difficult to find information about AI harms on the African continent. She emphasizes the importance of documenting these incidents so governments can craft appropriate regulations and hold responsible parties accountable.
EVIDENCE
She mentions cases from Nigerian and South African universities where AI was used to automatically grade standardized exams, causing issues for students trying to dispute their scores, but these incidents didn’t make mainstream news.
MAJOR DISCUSSION POINT
Defining Safe and Trusted AI for Africa
Argument 2
Africans want opportunities to contribute equitably to AI governance structures and development
EXPLANATION
Dr. Okolo argues that young Africans want opportunities to participate equitably in AI development and governance structures. She sees this as addressing both the epidemic of underemployment on the continent and the desire to contribute to systems that have the power to change the world.
EVIDENCE
She notes that AI safety and fairness conferences often focus on race as a Western construct, while African researchers could contribute new research on how AI impacts people from different castes, tribes, religions, and genders.
MAJOR DISCUSSION POINT
African Participation and Agency in AI Development
AGREED WITH
Mark Gaffley, Michelle Malonza
Argument 3
International panels and councils provide platforms for African voices in global AI governance
EXPLANATION
Dr. Okolo highlights the importance of African representation in international AI governance bodies. She expresses satisfaction with African representation on the UN’s international scientific panel on AI and mentions complementary efforts like the Africa AI Council.
EVIDENCE
She mentions reading nearly every application for the UN panel and being pleased to see eight African representatives, more than the expected four or five, and references her own service on the AU’s Continental AI Strategy and Nigeria’s National AI Strategy.
MAJOR DISCUSSION POINT
Collaboration and Capacity Building Across Africa
Argument 4
AI safety institutes should be established to provide independent evaluation capacity
EXPLANATION
Dr. Okolo argues that governments need to establish AI safety institutes or similar bodies to build independent capacity for evaluating AI systems. She emphasizes this would help governments avoid being bombarded by suppliers and make transparent, in-house decisions about AI procurement.
EVIDENCE
She references the US model where AI safety evaluation is embedded within the National Institute of Standards and Technology, which tests various technologies beyond just AI, including food, cosmetics, and other products.
MAJOR DISCUSSION POINT
AI Integration into Critical Infrastructure
Argument 5
Simple non-AI solutions may be more appropriate than complex AI systems for many problems
EXPLANATION
Dr. Okolo argues that AI is often misconceived as a panacea for development challenges, when simpler solutions like building hospitals, paying teachers, or installing reliable electrical grids would actually solve problems better. She warns against governments doubling down on AI solutions when basic infrastructure needs remain unmet.
EVIDENCE
She mentions work with the World Bank showing that African governments are being bombarded by suppliers to buy AI solutions, many of which are unnecessary, and governments lack capacity to evaluate these transparently.
MAJOR DISCUSSION POINT
Addressing Digital Exclusion and Development Priorities
AGREED WITH
Mark Gaffley
Argument 6
Young Africans should engage in policy feedback and research to influence AI governance
EXPLANATION
Dr. Okolo encourages the younger generation to get involved in AI policy by providing feedback on strategies, conducting research, and creating their own pathways for input when formal mechanisms don’t exist. She expresses hope for African governments’ AI policy development despite implementation challenges.
EVIDENCE
She notes that while some countries have had open feedback periods for their AI strategies, most haven’t, but suggests that doing research and legal analysis and sharing findings openly can still create change.
MAJOR DISCUSSION POINT
Addressing Digital Exclusion and Development Priorities
Mark Gaffley
5 arguments · 166 words per minute · 788 words · 284 seconds
Argument 1
75% of South Africans know very little about AI, learning through informal channels
EXPLANATION
Mark Gaffley reports findings from a public awareness survey showing that nearly 75% of respondents knew very little about AI. For those who did know about AI, most learned through informal and unstructured channels like social media and television rather than formal education.
EVIDENCE
The data comes from a module in the annual South African Social Attitudes Survey, a nationally representative survey released in September of the previous year.
MAJOR DISCUSSION POINT
African Participation and Agency in AI Development
Argument 2
Educational programs and MOOCs can equip Africans to make informed decisions about AI
EXPLANATION
Mark Gaffley describes how educational initiatives can help move the needle on AI awareness and equip people to make informed decisions about AI technologies. He emphasizes the importance of creating awareness so that when people interact with AI, they can make meaningful choices about what they want from these systems.
EVIDENCE
He mentions GCG’s short courses on ethical and human rights implications of AI through accredited universities, which attract thousands of applications globally, and their Women in Focus scholarship series prioritizing African women. They also have an online MOOC launching with relatable caricatures and imagery.
MAJOR DISCUSSION POINT
Collaboration and Capacity Building Across Africa
AGREED WITH
Dr. Chinasa Okolo, Michelle Malonza
Argument 3
The necessity of AI solutions should be evaluated before implementation
EXPLANATION
Mark Gaffley advocates for asking whether AI is actually necessary for solving the problem at hand before implementing AI systems. He emphasizes the importance of keeping alternatives open and maintaining analog approaches for those who cannot access digital solutions.
MAJOR DISCUSSION POINT
AI Integration into Critical Infrastructure
AGREED WITH
Dr. Chinasa Okolo
Argument 4
Analog alternatives should remain available for those who cannot access digital systems
EXPLANATION
Mark Gaffley argues that when digitizing services or implementing AI tools, it’s important to maintain analog approaches for people who cannot access the digital systems. He acknowledges his role as somewhat of a contrarian voice advocating for non-technological solutions.
MAJOR DISCUSSION POINT
AI Integration into Critical Infrastructure
Argument 5
The digitally excluded may retain cognitive abilities that become valuable as AI advances
EXPLANATION
Mark Gaffley presents a philosophical perspective that those currently digitally excluded might eventually become the last vestiges of creativity and independent decision-making abilities on the planet. He suggests that over time, those without access to AI might be relied upon for creative ideas and cognitive abilities that others may lose.
MAJOR DISCUSSION POINT
Addressing Digital Exclusion and Development Priorities
Speaker 1
1 argument · 103 words per minute · 627 words · 362 seconds
Argument 1
African-led organizations are building local capacity for AI safety, governance, and evaluations research
EXPLANATION
Speaker 1 introduces the panel by highlighting that their organizations represent a growing ecosystem of African-led efforts focused on AI governance, safety, and capacity building. This includes AI Safety South Africa working on building local capacity for AI safety alongside evaluations research.
EVIDENCE
Mentions specific organizations including AI Safety South Africa and their work on building local capacity for AI safety alongside evaluations research
MAJOR DISCUSSION POINT
Introduction and Context Setting
Speaker 2
2 arguments · 149 words per minute · 527 words · 210 seconds
Argument 1
Safe and Trusted AI should be broadly considered as AI that delivers the outcomes we want
EXPLANATION
Speaker 2 provides a foundational definition for the panel discussion, framing Safe and Trusted AI as systems that deliver desired outcomes. This sets the framework for exploring what constitutes undesirable AI-driven outcomes in the African context.
MAJOR DISCUSSION POINT
Defining Safe and Trusted AI for Africa
Argument 2
Africa needs mechanisms to monitor and mitigate AI risks given that frontier AI development occurs outside the continent
EXPLANATION
Speaker 2 raises the critical question of how Africa can monitor and mitigate AI risks when most frontier AI development happens outside of African contexts. This highlights the challenge of governance and oversight when the technology is developed elsewhere but impacts African communities.
MAJOR DISCUSSION POINT
Defining Safe and Trusted AI for Africa
Michelle Malonza
4 arguments · 218 words per minute · 600 words · 164 seconds
Argument 1
Africans need to understand AI technology before they can define what they want from AI systems
EXPLANATION
Michelle Malonza emphasizes that in order for Africans to know what they want from AI systems, they first need to understand what AI technologies exist and what specific technology is being discussed when referring to AI. This builds on the point about public awareness being a prerequisite for meaningful participation in AI governance.
EVIDENCE
References Mark’s findings about 75% of South Africans knowing very little about AI
MAJOR DISCUSSION POINT
African Participation and Agency in AI Development
AGREED WITH
Dr. Chinasa Okolo, Mark Gaffley
Argument 2
Capacity and access are essential pathways to achieving agency and empowerment in AI
EXPLANATION
Michelle Malonza synthesizes the panel discussion by identifying capacity building and access as the fundamental mechanisms through which Africans can achieve agency and empowerment in AI systems. She connects these concepts as prerequisites for meaningful participation in AI governance and development.
EVIDENCE
Draws from panelist responses about the importance of capacity building and access
MAJOR DISCUSSION POINT
African Participation and Agency in AI Development
Argument 3
African countries should collaborate rather than compete to gain leverage against big tech companies
EXPLANATION
Michelle Malonza summarizes a key theme from the panel discussion that African countries need to work together collaboratively rather than competing with each other in order to have sufficient leverage when dealing with large technology companies. This cooperation is seen as essential for effective AI governance.
EVIDENCE
Synthesizes points made by panelists about the need for cooperation over competition
MAJOR DISCUSSION POINT
Collaboration and Capacity Building Across Africa
Argument 4
AI integration into critical infrastructure appears inevitable and requires careful consideration
EXPLANATION
Michelle Malonza frames the integration of AI into critical infrastructure as an inevitability that African governments need to prepare for. She emphasizes the need for careful consideration of what factors should guide this integration process to ensure it serves African interests and values.
MAJOR DISCUSSION POINT
AI Integration into Critical Infrastructure
Audience
4 arguments · 170 words per minute · 574 words · 202 seconds
Argument 1
Mandatory watermarks on AI-generated media could help combat disinformation
EXPLANATION
An audience member suggests that requiring mandatory watermarks on AI-generated media (videos, songs, pictures) could help people identify AI-generated content and naturally be less inclined to believe it. This is proposed as a potential solution to the disinformation problem discussed by the panel.
EVIDENCE
References the concern about AI tools being used for media creation rather than more practical applications like household tasks
MAJOR DISCUSSION POINT
Addressing Misinformation and Disinformation
Argument 2
AI advancement risks widening the digital divide for the 64% of Africans without internet access
EXPLANATION
An audience member raises concern that as AI advances, it may further exclude the significant portion of Africa’s population that lacks basic digital access. They question how to ensure AI progress doesn’t leave behind those without internet, electricity, and other basic digital infrastructure.
EVIDENCE
Cites that 64% of the African continent lacks internet access and is digitally excluded
MAJOR DISCUSSION POINT
Addressing Digital Exclusion and Development Priorities
Argument 3
AI might determine optimal economic and social structures for humanity
EXPLANATION
An audience member poses a philosophical question about whether AI systems, as they become more advanced, might be able to determine the optimal economic and social structures for humanity. They suggest AI could potentially design better systems than current approaches like capitalism or socialism.
EVIDENCE
References how current systems optimize for specific things like engagement on social media
MAJOR DISCUSSION POINT
Philosophical Implications of AI Development
Argument 4
Policy interventions should target AI while it’s still emerging in Africa
EXPLANATION
An audience member suggests that African governments should implement AI policies now while the technology hasn’t yet become widespread on the continent. They propose that this timing presents an opportunity to regulate who learns about AI and how it’s implemented before it becomes more entrenched.
MAJOR DISCUSSION POINT
Policy Timing and Implementation
Agreements
Agreement Points
AI systems should empower people and provide agency rather than create dependency
Speakers: Ambassador Philip Tigo, Professor Jonathan Shock
AI systems creating dependency rather than building capacity represents digital neocolonialism
AI should provide empowerment and agency within local contexts and languages
Both speakers agree that AI should enhance human agency and empowerment rather than creating dependency. Ambassador Tigo frames dependency as digital neocolonialism, while Professor Shock emphasizes that AI should give people agency to understand possibilities and make choices within their local contexts.
Local context, knowledge, and cultural understanding are essential for AI systems
Speakers: Ambassador Philip Tigo, Professor Jonathan Shock
AI built without African knowledge, wisdom, and cultures creates existential threats
AI should provide empowerment and agency within local contexts and languages
Both speakers emphasize the critical importance of incorporating local context, knowledge, and cultural understanding into AI systems. They argue that without this local grounding, AI systems cannot effectively serve African communities and may even pose threats.
Collaboration and cooperation among African countries is essential, not competition
Speakers: Ambassador Philip Tigo, Professor Jonathan Shock
Competition between African countries on AI is counterproductive – cooperation is essential
Existing grassroots organizations like Masakhane and Deep Learning Indaba provide strong foundations
Both speakers strongly advocate for collaborative approaches among African countries and organizations rather than competitive ones. Ambassador Tigo explicitly calls for an end to wasteful competition, while Professor Shock highlights existing successful collaborative initiatives that should be built upon.
Simple, non-AI solutions may be more appropriate than complex AI systems for many problems
Speakers: Dr. Chinasa Okolo, Mark Gaffley
Simple non-AI solutions may be more appropriate than complex AI systems for many problems
The necessity of AI solutions should be evaluated before implementation
Both speakers advocate for careful evaluation of whether AI is actually necessary before implementation. Dr. Okolo argues that basic infrastructure like hospitals and electrical grids often solve problems better than AI, while Mark Gaffley emphasizes asking whether AI is actually necessary for the problem at hand.
Capacity building and education are fundamental prerequisites for meaningful AI participation
Speakers: Dr. Chinasa Okolo, Mark Gaffley, Michelle Malonza
Africans want opportunities to contribute equitably to AI governance structures and development
Educational programs and MOOCs can equip Africans to make informed decisions about AI
Africans need to understand AI technology before they can define what they want from AI systems
All three speakers agree that education and capacity building are essential for meaningful participation in AI governance and development. They emphasize that people need to understand AI technology before they can make informed decisions about what they want from these systems.
Similar Viewpoints
Both speakers emphasize the need for African institutions to have independent capacity to evaluate AI systems. Ambassador Tigo focuses on scientists needing access to models for evaluation, while Dr. Okolo advocates for establishing dedicated AI safety institutes to provide this capacity.
Speakers: Ambassador Philip Tigo, Dr. Chinasa Okolo
African scientists need access to AI models to evaluate systems that impact their communities
AI safety institutes should be established to provide independent evaluation capacity
Both speakers recognize the importance of documenting and addressing AI-related harms, particularly those affecting African contexts. Professor Shock focuses on immediate risks from misinformation campaigns, while Dr. Okolo emphasizes the need for better documentation of AI incidents affecting Africa.
Speakers: Professor Jonathan Shock, Dr. Chinasa Okolo
Misinformation and disinformation campaigns targeting elections and female politicians pose immediate risks
Current AI incident databases lack comprehensive coverage of African contexts and harms
Both speakers advocate for institutional mechanisms to ensure responsible AI procurement and deployment by governments. They emphasize the need for governments to have capacity and frameworks to evaluate AI systems before adoption.
Speakers: Ambassador Philip Tigo, Dr. Chinasa Okolo
Governments should establish guardrails and safety benchmarks in procurement processes
AI safety institutes should be established to provide independent evaluation capacity
Unexpected Consensus
The immediate priority of addressing basic infrastructure needs over advanced AI deployment
Speakers: Ambassador Philip Tigo, Dr. Chinasa Okolo, Mark Gaffley
AI should be used to optimize development and accelerate infrastructure projects
Simple non-AI solutions may be more appropriate than complex AI systems for many problems
The necessity of AI solutions should be evaluated before implementation
Despite being at an AI safety conference, there was unexpected consensus that basic infrastructure needs (electricity, hospitals, education) should often take priority over AI deployment. This pragmatic approach suggests a mature understanding that AI should serve development goals rather than be pursued for its own sake.
The potential value of digital exclusion in preserving human cognitive abilities
Speakers: Mark Gaffley, Professor Jonathan Shock
The digitally excluded may retain cognitive abilities that become valuable as AI advances
Transparent, human-in-the-loop systems are preferable to black box solutions
There was unexpected philosophical consensus that maintaining human agency and cognitive abilities is valuable, even if it means being less digitally integrated. This represents a counter-narrative to typical digital inclusion discussions, suggesting that some forms of exclusion might preserve important human capacities.
Overall Assessment

The speakers demonstrated strong consensus on several key principles: the need for African agency and empowerment in AI development, the importance of collaboration over competition among African countries, the necessity of local context and cultural understanding in AI systems, and the priority of capacity building and education. There was also agreement on the need for institutional mechanisms to evaluate AI systems and the importance of addressing basic development needs.

High level of consensus with significant implications for African AI governance. The agreement suggests a mature, pragmatic approach that prioritizes African agency, collaborative development, and careful evaluation of AI necessity. This consensus provides a strong foundation for coordinated African approaches to AI governance and suggests that African stakeholders are aligned on fundamental principles, even if they may differ on specific implementation strategies.

Differences
Different Viewpoints
Effectiveness of watermarking AI-generated content
Speakers: Professor Jonathan Shock, Audience
Professor Shock argues that watermarking is ineffective because malicious actors will simply choose open source models without watermarks, making it a very short stopgap solution
Audience member suggests mandatory watermarks on AI-generated media could help people identify AI-generated content and be less inclined to believe it
Professor Shock believes watermarking is futile due to availability of non-watermarked alternatives, while audience member sees it as a viable solution for combating disinformation
Prioritization of existential AI risks vs immediate practical risks
Speakers: Ambassador Philip Tigo, Professor Jonathan Shock
Ambassador Tigo argues that for Africa, existential risk should be redefined away from science fiction scenarios like AI pressing nuclear buttons, focusing instead on real threats to democracy and social harmony
Professor Shock acknowledges existential threats are important to study but emphasizes immediate threats like misinformation campaigns and breakdown of trust in society
Both recognize different types of risks but disagree on which should receive priority attention – Ambassador Tigo dismisses traditional existential risk scenarios as irrelevant to Africa, while Professor Shock maintains they are important to study alongside immediate threats
Value of digital exclusion
Speakers: Mark Gaffley, Dr. Chinasa Okolo, Ambassador Philip Tigo
Mark Gaffley presents digital exclusion as potentially valuable, suggesting the digitally excluded may retain cognitive abilities and creativity that become relied upon as others experience mental arrest from AI dependence
Dr. Chinasa Okolo and Ambassador Philip Tigo focus on using AI to address digital divides and accelerate development rather than viewing exclusion as beneficial
Mark Gaffley offers a contrarian philosophical view that digital exclusion might preserve human capabilities, while other panelists see digital inclusion and AI adoption as necessary for development
Unexpected Differences
Role of competition vs cooperation among African countries
Speakers: Ambassador Philip Tigo
Ambassador Tigo strongly argues against competition between African countries on AI, expressing frustration at wasteful competition and emphasizing need for collective effort
This was unexpected as no other panelist explicitly advocated for competition, yet Ambassador Tigo’s passionate response suggests this is a significant ongoing issue in African AI development that others may not have directly addressed
Necessity of AI adoption vs maintaining alternatives
Speakers: Mark Gaffley, Ambassador Philip Tigo
Mark Gaffley advocates for questioning whether AI is actually necessary and maintaining analog alternatives for those who cannot access digital systems
Ambassador Philip Tigo argues that governments face pressure to adopt AI because young populations are already using these tools, making rational choices about non-adoption difficult
This disagreement was unexpected as it reveals a fundamental tension between cautious, inclusive approaches versus pragmatic responses to technological inevitability that wasn’t explicitly debated but emerged through their different perspectives
Overall Assessment

The panel showed remarkable consensus on major goals (African agency, capacity building, avoiding digital colonialism) but revealed subtle yet significant disagreements on implementation approaches, risk prioritization, and the pace of AI adoption

Low to moderate disagreement level with high strategic implications – while speakers largely agreed on desired outcomes, their different approaches to achieving these goals could lead to conflicting policy recommendations and resource allocation decisions across African countries

Partial Agreements
All agree on the need for African oversight and evaluation of AI systems, but disagree on mechanisms – Ambassador Tigo focuses on access and government accountability, Professor Shock on transparency and human involvement, Dr. Okolo on institutional capacity building
Speakers: Ambassador Philip Tigo, Professor Jonathan Shock, Dr. Chinasa Okolo
Ambassador Tigo emphasizes the need for African scientists to have access to AI models for evaluation and for governments to hold companies accountable
Professor Shock advocates for transparent, human-in-the-loop systems rather than black box solutions
Dr. Chinasa Okolo argues for establishing AI safety institutes to provide independent evaluation capacity
Both agree that development priorities should guide technology adoption, but disagree on approach – Dr. Okolo favors basic infrastructure over AI solutions, while Ambassador Tigo sees AI as a tool to accelerate traditional development
Speakers: Dr. Chinasa Okolo, Ambassador Philip Tigo
Dr. Okolo argues that simple non-AI solutions, like building hospitals and paying teachers, may be more appropriate than complex AI systems for many development problems. Ambassador Tigo argues that AI should be used to optimize development and accelerate infrastructure projects, leveraging technology for non-sensitive capabilities.
Takeaways
Key takeaways
Safe and trusted AI for Africa must prioritize building local capacity over creating dependency, avoiding digital neocolonialism where value is extracted while leaving African institutions as mere users
Immediate AI risks for Africa include misinformation/disinformation campaigns targeting elections and vulnerable groups, rather than distant existential risks from rogue AI systems
African participation in AI development requires access to models for evaluation, capacity building for scientists and policymakers, and development of local AI systems that understand African contexts and languages
Cooperation rather than competition between African countries is essential for effective AI governance, leveraging existing grassroots organizations and shared computational resources
AI integration into critical infrastructure requires careful evaluation of necessity, transparent procurement processes with safety benchmarks, and maintaining human-in-the-loop decision making
Educational initiatives are crucial, as 75% of South Africans know little about AI, with most learning through informal channels like social media
AI should be used strategically to accelerate development priorities like connectivity, electricity, and literacy rather than as an end in itself
Resolutions and action items
Launch of the African Compute Initiative at University of Cape Town to provide shared computational resources across African universities
Release of an online MOOC by the Center for Global AI Governance offering free AI education content with relatable African imagery and caricatures
Continued scholarship programs prioritizing African women through the Women in Focus series
Development of playbooks and negotiation tools to help African policymakers engage with large tech companies
Establishment of AI safety institutes or similar independent evaluation bodies within African governments
Integration of safety benchmarks and audit requirements into government AI procurement processes
Unresolved issues
How to effectively hold trillion-dollar tech companies accountable when African countries have much smaller GDPs and limited regulatory leverage
Addressing the digital divide, in which 64% of Africans lack internet access, while advancing AI development
Balancing the pressure from young populations already using AI tools with the need for careful, regulated implementation in government systems
Creating comprehensive AI incident databases that adequately capture African contexts and harms
Developing formal advocacy pathways for civil society to influence AI policy across diverse African political systems
Moving from AI strategies (which exist) to actual implementable AI policies (which are largely missing)
Ensuring watermarking and other technical solutions remain effective against malicious actors using open-source models
Suggested compromises
Using market pressure rather than just regulatory pressure to influence tech companies, leveraging Africa’s significant user base (e.g., Kenya being the biggest ChatGPT user)
Maintaining analog alternatives alongside AI systems to ensure inclusion of those who cannot access digital solutions
Focusing AI safety efforts on immediate, contextually relevant risks rather than distant existential threats, while still supporting some research on long-term risks
Combining civil society and government efforts rather than maintaining traditional separation, given the existential nature of AI risks to the continent
Prioritizing local private sector partnerships over global big tech to maintain domestic legal jurisdiction and control
Using AI strategically to optimize existing development challenges (energy, infrastructure) rather than implementing AI for its own sake
Thought Provoking Comments
I think the first part of this conversation is largely that if AI systems are creating a dependency rather than building capacity or capability, I think for me that’s undesirable, because the erosion of human agency, especially for a continent that is still trying to aspire, is a problem. If AI systems are extractors of African data, if they are capturing our African markets, and there’s a concentration of value outside the continent while leaving our institutions as mere implementers or users, then I think for me, as I said, it’s digital neocolonialism.
This comment reframes AI safety from a uniquely African perspective, moving beyond technical risks to focus on economic sovereignty and human agency. It introduces the powerful concept of ‘digital neocolonialism’ and positions dependency vs. capacity-building as a central tension.
This comment established the foundational framework for the entire discussion, with subsequent speakers consistently returning to themes of agency, empowerment, and African-specific risks. It shifted the conversation away from Western-centric AI safety concerns toward contextually relevant issues.
Speaker: Ambassador Philip Tigo
Stop competing. I’m really, it’s, I’m sorry, sometimes I stop being an ambassador at some point. Because AI is not ICT. It’s not about who’s going to build the best data centers. You know, who’s going to do X or Y. This is a collective all-in effort. I think for me, that’s the biggest shift that we need to make. That it’s not about competition. It’s about cooperation and collaboration.
This passionate interjection cuts through diplomatic language to address a fundamental strategic error. The raw emotion (‘stop being an ambassador’) and the clear distinction between AI and traditional ICT infrastructure reveals deep frustration with current approaches and offers a paradigm shift toward collaboration.
This comment created a turning point in the discussion about regional cooperation. It prompted other panelists to provide concrete examples of collaborative initiatives and reinforced the theme that African countries must work together rather than compete for scraps from global tech companies.
Speaker: Ambassador Philip Tigo
These findings may reveal that African populations are some way away from being able to define what they want from AI, because quite simply the majority of citizens are unaware that technology even exists. This drives the need for creating awareness and educating our peers on AI, so that when the time does come to interact with it, they can make informed and meaningful decisions about what they want.
This comment introduces a fundamental prerequisite that challenges the entire premise of the discussion – how can people define what they want from AI if they don’t know it exists? It grounds the theoretical discussion in empirical data (75% of respondents knew very little about AI) and identifies a critical gap in the pathway to agency.
This shifted the conversation toward the practical foundations needed before higher-level policy discussions can be meaningful. It influenced subsequent discussions about capacity building and highlighted the importance of public education as a prerequisite for democratic participation in AI governance.
Speaker: Mark Gaffley
But if we have a couple of AI strategies in the continent, we do not necessarily have AI policies in the continent. So there’s already no mechanism to do this. And that’s AI in general. We’re not even talking specific about safety… So we have to even redefine what existential risk for Africa on AI means. And I think this is where we really have to break from that. And we can have a few of our scientists doing the existential risks models, models running rogues and science fiction. I think that’s important work. But the risk that he’s mentioned is real, right? Threats to democracy, threats to harmony of society are real risks.
This comment makes a crucial distinction between strategies (aspirational documents) and policies (actionable frameworks) while challenging the global AI safety discourse. It argues for redefining ‘existential risk’ in African contexts – from sci-fi scenarios to immediate threats to democracy and social harmony.
This comment fundamentally reoriented the discussion about AI risks and safety priorities. It validated Professor Shock’s earlier points about misinformation while establishing a hierarchy of risks that puts immediate, contextually relevant threats above speculative future scenarios. This influenced how other panelists framed their subsequent responses about practical safety measures.
Speaker: Ambassador Philip Tigo
I’ve given an example: Kenya is the biggest user of ChatGPT, and the first use of ChatGPT is emotional advice. That’s real data. So you’re asking a model for emotional advice that doesn’t understand your context. What does that mean?
This specific, data-driven example powerfully illustrates the abstract concept of cultural misalignment. The image of Kenyans seeking emotional advice from a culturally blind AI system makes the risks tangible and immediate, moving beyond theoretical discussions to real human impact.
This concrete example gave weight to all the previous abstract discussions about context and cultural understanding. It provided other panelists with a clear reference point for discussing the importance of local context in AI systems and reinforced the urgency of the capacity-building discussions.
Speaker: Ambassador Philip Tigo
I think there’s something else which we have to be very aware of, which is happening right now… And that’s misinformation and disinformation… To me, in the short term, that’s really worrying. I think it’s quite difficult to talk about the long term… there are things that are real that are happening now that we have to worry about and try to mitigate.
This comment introduces a temporal framework that prioritizes immediate, observable risks over speculative future threats. It also brings gender-based violence into the AI safety discussion, expanding the scope beyond traditional technical concerns to include social justice issues.
This established the short-term vs. long-term risk framework that other panelists, particularly Ambassador Tigo, built upon. It also introduced the theme of technology-facilitated gender-based violence, adding a social justice dimension to the safety discussion that influenced how other panelists framed empowerment and agency.
Speaker: Professor Jonathan Shock
Overall Assessment

These key comments fundamentally shaped the discussion by establishing an African-centric framework for AI safety that diverges significantly from Western discourse. Ambassador Tigo’s interventions were particularly influential, introducing concepts like digital neocolonialism and redefining existential risk, while his emotional plea for collaboration created a turning point that influenced how other panelists discussed regional cooperation. Mark Gaffley’s empirical grounding about public awareness provided a reality check that influenced discussions about capacity building, while Professor Shock’s focus on immediate risks created a temporal framework that other speakers adopted. Together, these comments moved the conversation from abstract global AI safety concerns toward concrete, contextually relevant challenges facing African communities, creating a more grounded and actionable discussion about pathways forward.

Follow-up Questions
How can Africa develop comprehensive AI incident databases that capture harms specific to the continent?
Current AI incident databases don’t adequately capture African contexts – searching for ‘Africa’ redirects to ‘African American’ content, indicating a gap in documenting AI harms on the continent
Speaker: Dr. Chinasa Okolo
What are the formal advocacy pathways available across different African countries for AI policy influence?
She acknowledged not being aware of similar advocacy pathways in African countries compared to the US system, highlighting a need to map out these mechanisms
Speaker: Dr. Chinasa Okolo
How should Africa redefine ‘existential risk’ in the context of AI to focus on locally relevant threats?
He emphasized that traditional AI existential risks may not be relevant to Africa, and the continent needs to define what existential risk means in their context – focusing on threats to democracy and social harmony rather than science fiction scenarios
Speaker: Ambassador Philip Tigo
What mechanisms can ensure African scientists get access to frontier AI models for evaluation and safety research?
He noted that African countries are major users of AI systems but lack access to evaluate these models, which is essential for conducting safety assessments relevant to their contexts
Speaker: Ambassador Philip Tigo
How can African governments build capacity to negotiate effectively with trillion-dollar AI companies?
He highlighted the power imbalance between African governments and major tech companies, suggesting need for negotiation tools, playbooks, and guidebooks
Speaker: Ambassador Philip Tigo
What would effective AI procurement guidelines look like for African governments?
He suggested that procurement is a key leverage point where safety benchmarks and audit requirements can be included, but indicated this isn’t being done effectively currently
Speaker: Ambassador Philip Tigo
How can the effectiveness of mandatory watermarking for AI-generated content be evaluated as a solution to misinformation?
This was raised as a potential technical solution to combat AI-generated misinformation, though Professor Shock expressed skepticism about its long-term effectiveness
Speaker: Audience member
How can AI advancement be leveraged to bridge rather than widen the digital divide in Africa?
With 64% of Africa lacking internet access, there’s concern that AI advancement could exacerbate digital exclusion rather than promote inclusion
Speaker: Audience member
What would AI safety institutes or evaluation bodies look like when adapted to African contexts and needs?
She referenced the US model embedded in NIST but noted that African versions would need to be designed differently to align with local needs and values
Speaker: Dr. Chinasa Okolo
How can the Africa AI Council complement UN AI governance initiatives effectively?
She mentioned looking forward to seeing how this regional body would interact with global UN AI governance efforts, suggesting need for clarity on coordination
Speaker: Dr. Chinasa Okolo
What are the most effective models for continental AI collaboration that move beyond competition?
He emphasized the need to shift from competitive to collaborative approaches but the specific mechanisms for achieving this continental cooperation need further exploration
Speaker: Ambassador Philip Tigo
How can open-source AI development be strategically leveraged to build African AI capacity and reduce dependency?
He suggested building African models from open source as an alternative to dependence on major AI companies, but the practical implementation pathway needs development
Speaker: Ambassador Philip Tigo

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Shaping AI’s Story Trust Responsibility & Real-World Outcomes


Session at a glance: Summary, keypoints, and speakers overview

Summary

This discussion at the AI Impact Summit focused on achieving responsible AI deployment through trust, accountability, and sustainable scaling across public and private sectors. The panel, moderated by Mridu Bhandari, included representatives from the Australian government, First Abu Dhabi Bank, Ericsson, and Wipro, exploring how to balance AI innovation with risk management and public trust.


Paul Hubbard from Australia’s Department of Finance emphasized that trust is foundational to AI innovation rather than a barrier, advocating for a people-first approach that meets citizens where they are and ensures democratic participation in AI deployment. Erik Ekudden from Ericsson discussed the evolution of networks from passive carriers to intelligent fabrics, highlighting how infrastructure must become an active enabler of AI applications, from consumer devices like AI glasses to industrial applications requiring distributed inference capabilities.


Divyesh Vithlani from First Abu Dhabi Bank shared their platform-first approach to AI governance, treating AI agents similarly to human employees with performance management, training programs, and accountability measures. He stressed that in banking, trust is existential rather than philosophical, requiring robust governance frameworks built into AI platforms from the ground up. Hari Shetty from Wipro advocated for “proof over promise,” emphasizing problem-first thinking rather than model-first approaches, and highlighted the importance of enterprise-ready solutions that work consistently over time.


The discussion revealed consensus that AI risks are manageable rather than insurmountable, requiring appropriate guardrails and governance structures. Panelists agreed that measuring AI success should extend beyond productivity metrics to include business transformation, decision velocity, and competitive advantage. Looking ahead, they envisioned an AI-native future where seamless integration of human and artificial intelligence transforms how we work, bank, and interact with technology, provided that trust and inclusion remain central to development efforts.


Keypoints

Major Discussion Points:

Building Trust as a Foundation for AI Innovation: The panel emphasized that trust isn’t opposed to innovation but rather enables it. Paul Hubbard stressed the importance of democratic, participatory approaches that meet people where they are, while Divyesh Vithlani discussed embedding governance and ethical AI principles directly into platform architecture to ensure safe, scalable deployment.


Infrastructure Evolution and Intelligent Networks: Erik Ekudden highlighted the transformation from passive AI carriers to active enablers, discussing how 5G/6G networks are becoming an “intelligent fabric” that hosts distributed AI workloads. The conversation covered practical applications like AI glasses and the need for networks to provide security, trust, and real-time responsiveness for emerging technologies.


Accountability and Governance in an AI-Driven World: The discussion explored who is responsible when AI agents make decisions, with panelists agreeing that accountability must reside with the domain providers. Vithlani described treating AI agents like human employees with performance management, onboarding/offboarding processes, and hierarchical responsibility structures.


Moving from AI Pilots to Scalable Value Creation: Hari Shetty emphasized “proof over promise,” advocating for problem-first thinking rather than model-first approaches. The panel discussed measuring ROI beyond productivity metrics, focusing on business transformation, decision velocity, and treating AI as fundamental infrastructure rather than optional technology.


Future Vision and Responsible Scaling: Looking ahead to 2030, panelists envisioned seamless AI integration across banking, networks, and daily life, with emphasis on inclusive deployment and maintaining human control. They stressed the importance of public-private partnerships and cross-sector collaboration to ensure AI benefits reach all segments of society.


Overall Purpose:

The discussion aimed to address how organizations and governments can achieve responsible AI deployment at scale while maintaining trust, accountability, and public benefit. The panel sought to move beyond theoretical frameworks to practical implementation strategies, focusing on the “seven chakras of aligned global cooperation” (human capital, inclusion, trust, resilience, science, resources, and social good) to translate AI ambitions into accountable action.


Overall Tone:

The discussion maintained an optimistic yet pragmatic tone throughout. Panelists were enthusiastic about AI’s transformative potential while acknowledging real challenges around trust, governance, and risk management. The conversation was collaborative and solution-oriented, with speakers building on each other’s insights rather than debating opposing viewpoints. The tone remained consistently forward-looking and constructive, emphasizing practical implementation over theoretical concerns, and concluded with shared excitement about the possibilities ahead while maintaining focus on responsible deployment principles.


Speakers

Speakers from the provided list:


Mridu Bhandari – Moderator from Network18


Paul Hubbard – First Assistant Secretary for AI Delivery and Enablement at the Department of Finance in the Australian Government; Public Policy Economist; Self-described “AI Masked Economist”


Divyesh Vithlani – Group Chief Technology and Transformation Officer, First Abu Dhabi Bank


Erik Ekudden – Chief Technology Officer of Ericsson


Hari Shetty – Strategist and Technology Officer at Wipro


Additional speakers:


– No additional speakers were identified beyond those in the provided speakers list.


Full session report: Comprehensive analysis and detailed insights

This comprehensive discussion at the AI Impact Summit brought together leaders from government, telecommunications, banking, and consulting to address how to achieve responsible AI deployment at scale whilst maintaining trust, accountability, and public benefit. The panel, moderated by Mridu Bhandari from Network18, explored practical implementation strategies for what Bhandari termed the “seven chakras of aligned global cooperation” – human capital, inclusion, trust, resilience, science, resources, and social good.


Trust as the Foundation of Innovation

The discussion began with a fundamental reframing of the relationship between trust and innovation in AI deployment. Paul Hubbard, First Assistant Secretary for AI Delivery and Enablement at Australia’s Department of Finance, challenged the conventional wisdom that positions these as competing priorities. Drawing from his background – including starting a podcast during COVID to explain economics and unpack jargon – Hubbard emphasised that trust enables rather than hinders innovation.


“It’s really important that we don’t frame it as like trust versus innovation,” Hubbard argued. “It’s actually a foundation of trust that lets you make the innovation.” This perspective established a framework where democratic participation and people-first approaches become enablers of technological progress, with emphasis on meeting citizens where they are in terms of their comfort with AI.


Divyesh Vithlani, Group Chief Technology and Transformation Officer at First Abu Dhabi Bank, reinforced this theme from a banking perspective: “It’s not either or. It’s not about you have trust or you have productive AI. Our business in banking relies 100% on trust. So that is not a value that we can compromise on any time. However, in order to make sure that we do deploy AI at scale in a trusted manner, it starts with conviction.”


Infrastructure Evolution for AI-First Applications

Erik Ekudden, Chief Technology Officer at Ericsson, provided insights into how network infrastructure must evolve to support AI applications. Rather than viewing networks as passive carriers, he described transformation towards what he called an “intelligent fabric” – networks that actively enable and host AI workloads.


“The network is already secure, trusted. It’s going to be a carrier of all these inference workloads,” Ekudden explained, highlighting how 5G and emerging 6G networks become the foundation for distributed AI applications. He illustrated this with practical examples like AI glasses providing real-time navigation and language translation – applications requiring network-based processing rather than on-device computation.


This shift from centralised training to distributed inference represents a fundamental architectural change. Ekudden projected scaling challenges involving billions of users and sensors, necessitating networks that provide tailored service quality and security for each application. On sustainability concerns, he argued that focus on energy-intensive training phases obscures the more efficient reality of distributed inference: “We’re not going to explode energy consumption just because we use more AI.”


Human-Centric AI Governance Models

Vithlani outlined an innovative approach to AI governance that treats artificial agents with human-like management frameworks. “I view an agent no different to a human so you do performance management,” he explained, describing systems including “agent university” for training and graduated autonomy based on demonstrated capabilities.


This governance model extends to operational management including performance monitoring and conflict resolution between agents and humans. “Whilst humans may fill out a timesheet to account for the work that they’ve done… we’re also monitoring the agent for the tokens that they’ve consumed for the output that they’ve generated,” creating parallel accountability structures.


The approach involves treating AI agents like new graduates – providing appropriate guardrails and supervision that evolve with experience and competence, ensuring human oversight whilst enabling scalability for large organisations.


Moving Beyond Pilots to Production Value

Hari Shetty, Strategist and Technology Officer at Wipro, addressed the persistent challenge of moving from pilot projects to scalable, production-ready solutions. His emphasis on “proof over promise” provided a framework for organisations struggling beyond perpetual experimentation.


“AI is no longer about pilots. It’s about being able to get value out of AI,” Shetty declared, outlining key principles for successful scaling. First, organisations must adopt problem-first thinking rather than model-first approaches – identifying business challenges before selecting AI technologies. Second, enterprise AI faces fundamentally different challenges than consumer applications: “Enterprises are necessarily messy,” requiring solutions that work within existing constraints.


Third, solutions must work reliably “every day, every hour, and every minute” rather than just in controlled environments. Finally, “agentic trust is earned” through consistent performance over time, treating trust as an outcome of reliable operation rather than a prerequisite.


Measuring AI Value Beyond Traditional ROI

The panel revealed sophisticated thinking about measuring AI success beyond traditional return-on-investment calculations. Shetty provided a provocative perspective, comparing AI adoption to foundational technologies: “It’s almost like going back in time – could you ask should we implement an email system, what’s the ROI on the email system?”


Vithlani outlined a three-tier approach to value measurement. At the micro level, AI provides productivity improvements through co-pilot technologies. At the enterprise level, AI transforms complex, error-prone processes with tangible financial impact. Most strategically, AI provides competitive advantage through enhanced organisational responsiveness – the ability to “respond and react to change faster than our competitors.”


Ekudden reinforced this multi-dimensional approach, noting that whilst efficiency gains of 10-50% represent significant value in telecommunications, the most exciting opportunities emerge when AI enables entirely new business models and revenue streams.


Risk Management: Balanced and Practical Approaches

The panel demonstrated consensus that AI risks are manageable through appropriate frameworks rather than representing insurmountable obstacles. Shetty provided direct assessment: “There is certainly a level of risk that one should be aware of and work with… but at the same time the hype about risk is also overstated. It’s a manageable risk, it’s not an uncontrolled, unmanageable risk.”


Different sectors show varying risk tolerance levels, with Ekudden noting that enterprise risk assessment has become “quite realistic” whilst government sectors may still overestimate risks. Vithlani contributed that AI risks can be managed by extending existing regulatory frameworks: “AI is not actually new technology,” noting that AI concepts predate cloud computing and mobile technology.


The discussion revealed nuanced thinking about contextual risk tolerance, with Shetty describing scenarios where 85% accuracy might be acceptable for certain processes whilst others require 99.99% reliability.


Accountability in Distributed Systems

As AI systems become more distributed and autonomous, the panel advocated for clear domain-specific responsibility rather than overarching accountability frameworks. Ekudden articulated this approach: “If you are replacing work with an agent, that basically needs to translate into accountability and then also transparency, trust and governance issues around those agents.”


He described a hierarchical model where different agent levels have different decision-making authority and corresponding accountability structures. This draws from existing practices in critical infrastructure management, where telecommunications networks already provide life-critical services with established safety guardrails.


Hubbard emphasised government accountability through clear communication and comprehensive planning – demonstrating how AI will create better jobs, attract investment, and spread benefits broadly whilst keeping citizens safe.


Future Vision and Cross-Sector Collaboration

Looking toward 2030, panellists provided concrete predictions about AI transformation. Shetty projected that “the decision velocity in organisations will completely change in the next four years,” with AI accelerating processes so dramatically that current speeds will seem “intolerably slow.”


Vithlani envisioned seamless AI integration where “banking will be a lot more seamless” and shopping becomes intuitive. Ekudden projected “digital colleagues” and “physical AI colleagues” working alongside humans, though acknowledging uncertainty about achieving comprehensive transformation globally within four years.


Throughout the discussion, panellists emphasised that realising positive AI visions requires unprecedented collaboration across sectors. Hubbard highlighted Australia’s AI CoLab as an example of effective cross-sector collaboration, bringing together government, private sector, academics, and not-for-profits.


The collaborative imperative extends beyond technical considerations to social and ethical dimensions, with Hubbard noting successful AI initiatives require “having the people who are going to be in the room who may not care about AI, but they do care about the services that are being delivered.”


Conclusion

The discussion revealed remarkable convergence across sectors on fundamental principles for responsible AI deployment: trust as foundational to innovation, manageable AI risks through appropriate frameworks, problem-first thinking, and inclusive approaches ensuring broad social benefit.


The conversation moved beyond abstract AI potential to concrete, actionable frameworks – Vithlani’s governance models, Shetty’s scaling principles, Ekudden’s network architecture, and Hubbard’s collaborative approaches provide practical guidance for organisations seeking to harness AI’s potential whilst maintaining safeguards.


The panellists’ shared optimism about AI’s transformative potential was balanced with realistic acknowledgement of required work. Their insights suggest that whilst technical capabilities for transformative AI applications are emerging rapidly, realising full potential depends on building trust, governance frameworks, and collaborative relationships to ensure AI serves humanity’s broader aspirations for progress and social good.


Session transcript

Complete transcript of the session
Mridu Bhandari

for shaping a sustainable AI future that we are calling People, Planet and Progress. And to translate these sutras into action, we are looking at what we call the seven chakras of aligned global cooperation. So these are the concrete pillars that will really turn ambition into accountability. We have human capital, inclusion, trust, resilience, science, resources and social good as the seven chakras that we are going to be talking about. Today we have with us a very eminent panel trying to answer the defining question of this AI-first decade that we are in: how can we achieve trust before scale, outcomes over optics, and responsibility as a competitive advantage? I'm Mridu Bhandari from Network18 and I'm very delighted to be joined by a panel of very distinguished guests here tonight.

Starting from my left, Paul Hubbard, First Assistant Secretary for AI Delivery and Enablement at the Department of Finance in the Australian government. Next to him, Divyesh Vithlani, Group Chief Technology and Transformation Officer, First Abu Dhabi Bank. Erik Ekudden, the Chief Technology Officer of Ericsson. And Hari Shetty, Strategist and Technology Officer at Wipro. Welcome, gentlemen. Thank you so much for joining us here today. You know, perhaps let's set the context with the foundations of trust and scale. And Paul, if I may start with you first, you know, I was going through your LinkedIn profile and you call yourself the AI Masked Economist.

So very interesting moniker there. Why don't you first tell us what that really means? And then we'll jump into the rest of the questions.

Paul Hubbard

Thanks for having me. It's great to be here in India. I think we all bring a lens to AI. The lens that I bring is economics. I'm a public policy economist, which for me means AI is not about technological adoption. It's all about what can generate public value, what generates public welfare.

Mridu Bhandari

And why do you call yourself the masked economist?

Paul Hubbard

Economist. That’s another story for you. That started in COVID, remember, when we were all wearing masks. And at the time, I started a podcast, which was all about explaining economics and unpacking the jargon. And I’ve kept that because I think explaining AI, unpacking the jargon, seeing how it relates to everyday life is really, really important.

Mridu Bhandari

Right. Now, when we talk about AI for social good, public permission is really, really important. Public trust is very important. Now, how do we really build societal confidence in AI without slowing down innovation? How are you doing that in Australia? Give us some examples of how you've been able to do that, especially because citizens all over the world today are demanding a lot more transparency and accountability when it comes to not just AI, but everything in general.

Paul Hubbard

Yeah, absolutely. I think it's really important that we don't frame it as trust versus innovation. It's actually a foundation of trust that lets you make the innovation. It's starting from the proposition of: what's the problem we're trying to solve, or what are we trying to deliver for citizens? If you're a government, what are you trying to deliver for your customers? Meet them where they're at. Now, different countries, different populations have different comfort levels already, different familiarity with AI. You've got to know where people are up to, what they want, and build from there, rather than just say, here's a brand new thing that we're going to impose on you. So I think really that framing, that democratic, participatory approach, that people-first approach, is key.

Mridu Bhandari

Right. Erik, coming to you: AI is often discussed at the application layer, but you've mentioned that intelligence must be embedded into the networks themselves. Now, how does infrastructure really evolve from being a very passive carrier of AI to becoming this active enabler of trust and of resilience?

Erik Ekudden

Yeah, so first of all, Ericsson builds networks, advanced connectivity, so 5G and 6G, and increasingly that's becoming this fabric that we all depend on. But let's start by thinking about what people are using today. Gen AI is already on hundreds of millions of smartphones, actually billions, already doing AI applications across the mobile infrastructure. So it's already secure and trusted. The network already provides the guarantees that you need. But I think, especially here in India, we're talking about industrial AI applications, agriculture. There's going to be a lot of AI in the fields, hospitals, education, smart manufacturing. So there's going to be a lot more dissemination of AI, from where we're focused today in training to distributed AI, or inference generation.

That’s going to happen much further out in the network. So the network is actually becoming the host for all those great AI experiences. We need to scale the networks to handle that. I don’t think I’m the only one. Maybe not everyone carries two pairs of glasses here, but AI glasses. They are already available in millions. Good AI glasses that give you navigation support, that gives you real -time language translation, maybe a prompt if you are on a stage making a keynote. I mean, these kind of things, they cannot be done on the device, on the wearables. You need to offload the AI, the inference from the glasses. So you can see the actual data. You can see the actual data.

You can see the actual data. You can see the actual data. You can see the actual data. You can see the actual data. You can see the actual data. You can see the actual data. You can see the actual data. edge. That’s why we talk about this as a transition to an intelligent fabric. The network is already secure, trusted. It’s going to be a carrier of all these inference workloads. So we’re just starting that journey. But I think it really comes back to basic principles. Networks need to be trusted. They need to be secure. They’re already moving from consumers into enterprise and government services, mission critical, big example here in India.
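Ekudden's offload argument can be sketched as a simple routing rule. This is purely illustrative Python; the RAM heuristic, thresholds, and device figures below are invented assumptions, not Ericsson's actual logic:

```python
# Illustrative sketch: deciding where an AI inference workload should run.
# Wearables like AI glasses cannot host large models locally, so inference
# is routed to the network edge when the latency budget allows it.
def route_inference(model_params_b: float, device_ram_gb: float,
                    latency_budget_ms: float, edge_rtt_ms: float) -> str:
    # Rough assumption: ~2 GB of RAM per billion parameters (fp16 weights).
    fits_on_device = model_params_b * 2 <= device_ram_gb
    edge_meets_latency = edge_rtt_ms <= latency_budget_ms
    if fits_on_device:
        return "on-device"
    if edge_meets_latency:
        return "edge"
    return "cloud"

# Glasses with ~1 GB RAM, a 7B-parameter model, live translation needing
# under 100 ms, and a 20 ms round trip to the edge:
print(route_inference(7, 1, 100, 20))  # edge
```

The point of the sketch is the middle branch: the model is too big for the wearable, but the edge is close enough to keep the experience real-time, which is exactly the "intelligent fabric" role described above.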

Mridu Bhandari

So what have the AI glasses been doing for you this week?

Erik Ekudden

I didn’t read every question because everything is perfect in India when it comes to finding new ways. On a serious note, I actually use them privately at work. But I start to see people getting really … good value because it is an AI assistant. And think of it once, especially like me, wearing glasses. Once I’ve switched for good to these glasses, why would I go back? Even when I’m indoor, even when I’m at home, even when I’m training or in the elevator, I want it to work. And that, of course, means that the network, this intelligent fabric, needs to be so much better than it is today. Of course, great 5G networks here in India, but in the future, we will need even better funds.

And I think this is a change in terms of we will not get the full value of AI. We will not leverage AI fully until we connect it to that better network for AI. And that’s really what I’m focusing on. But you want to try it on? It’s a good one. No, it’s a great one. It’s a little bit fantastic. That was a little bit of a gallop, yeah. Using AI or AR wearables, glasses. Earpods. Cameras. Cameras. A few? Okay, well, two, two. Probably a representative crowd here. I think we are very early in this journey. It’s going to be a fantastic journey, I believe, for both consumers and anyone of us working in companies.

Mridu Bhandari

Absolutely, absolutely. Well, I'm going to come back to you on that, Erik. Bringing Divyesh in: you know, in banking, trust is not philosophical, it is existential. So how do you really embed AI into core decision making while ensuring you don't dilute any risk discipline? What governance models have you put in place that actually work for you? You know, any best practices that you can share with us here today?

Divyesh Vithlani

Sure. Well, first of all, it's great to be here. And I'm already, you know, benefiting from the wisdom of my panellists, because my kids will tell you that I've been in denial about needing glasses. The eyesight is perfect, but the enlarging, the zooming really helps. But in reality, now I've got a different story for them: that I've been waiting for AI glasses before I really don a pair of specs. But coming back to your question, I'd pick up on what Paul said. It's not either/or. It's not about you have trust or you have productive AI. What we believe, like any regulated institution, is that there is no compromise on risks and controls.

Our business in banking relies 100% on trust. So that is not a value that we can compromise on at any time. However, in order to make sure that we do deploy AI at scale in a trusted manner, it starts with conviction, and we have conviction right at the very top of the organization that AI is a force for good. We've heard a lot this week about AI being a general purpose technology. I really love what Erik said about AI in the network, and I'll come to that in a second, because a large part of the answer is establishing a platform. But if we take a step back, a conventional organization is defined by its people, its processes, and its technology.

And there are all sorts of safeguards, guardrails, controls that have been built in. In the AI world, I think it's going to be about agents, models, and data. And I think we're going to have to have the same guardrails and the same controls, perhaps even stronger, because it will need AI to oversee and govern AI to be really effective. So the approach we've taken is, on the basis of the conviction that AI is a force for good, that it is a game changer and it is truly going to transform everything about how we live, work, play, and bank, we want to basically make sure that we empower the entire organization to leverage and scale AI in a safe, secure, efficient, and compliant manner.

Now, the only way, in my opinion, to do that is to take a platform-first approach. Just like Erik said about the network needing to be safe and secure, our AI platform and our agentic platform need to be safe and secure. So we have taken the approach of building a platform with all the different layers, from data, model, knowledge, context, and the use cases that sit on top of that, by building ethical AI, data governance, model-level governance, and the fair and appropriate use of AI into the platform. And by taking that approach, we are able to unleash the power of the technology in the hands of the end users. So just like when you open up Microsoft and start a new Excel, you're not thinking about whether it is safe or what the underlying architecture is.

You’re doing it fairly intuitively. And we’re going to be able to do the same thing with AI, that our folks, our business colleagues, our engineers can use AI as naturally and seamlessly as they do any other task. So taking that platform -first approach is what really is driving our sort of strategy to ensure that we drive AI at scale but with all the right trust and safeguards.

Mridu Bhandari

Right. All right. Bringing in Hari as well: you know, we've talked a little bit about public permission. We've talked about infrastructure. We've talked about governance, security. There's a final leap, which is from promise to proof. Now, enterprises are, of course, often caught between the AI hype and hesitation. You speak a lot about proof over promise. Elaborate that for us. And what really separates scalable AI from the perpetual pilots that we keep seeing a lot of enterprises deploying?

Hari Shetty

First and foremost, very happy to be here with the panellists here. And putting on the Wipro lens: what do we do? We take Erik's network, layer in the intelligence on top of it, and provide solutions to Divyesh. That's where we fit into this entire graph in terms of what we do. Now, coming back to proof over promise, you've absolutely brought up the most important topic that's in discussion across the summit here as well. AI is no longer about pilots. It's about being able to get value out of AI. And when we talk about proof over promise, we talk about four distinct elements that are important from a Wipro perspective.

Number one, don't start with a model. Don't talk about model X or model Y and then start. Don't start with model-first thinking; start with problem-first thinking. So you pick a problem, figure out what's the right approach to solving the problem, and then work your way backwards to look at, you know, what models can actually help you solve the problem. So that's the first approach.

The second part that we take care of is that the enterprise story is very different from the consumer story. Enterprises are necessarily messy. You've got technology that's like 20 years old, 30 years old. You've got different personas, you've got different security needs, data is, you know, in fragments across the organization. So the enterprise story is a completely different story from a consumer-grade story in terms of how things have to come together from an AI perspective. So in that context, our ability to prove a solution in the enterprise world is extremely important for us. And when we show it works in an enterprise, that's when other enterprises build trust, and that's when it's ready for diffusion. And by the way, we act as client zero for our solutions. So if we don't get it to work in our own enterprise, there's no point talking to any of the clients about implementing the solution.

The third principle is that whatever solution we build, it's not about making it work once. It should work every day, every hour, and every minute. And solutions that are capable of, you know, following that principle are the ones that we actually take to the market. That's another principle that's extremely important for us.

And last, going back to the trust that we all talked about: if you look at human trust, human trust is earned. Even agentic trust is earned. You need something that can work for a long period of time without hallucination, without fundamental flaws in the model, so that there's trust built into it. So only when things work consistently over a longer period of time do you build trust. And these are the four principles that we use when we talk about proof over promise.

Mridu Bhandari

Right. All right. Well, we’re going to shift gears a little bit and also talk about accountability because we’re talking a lot about architecture. Let’s also talk about who’s accountable for what in an enterprise and perhaps in the society as well. Now, Paul, when we talk about responsible AI at a national level, what does accountability really look like for leaders? Is it about measurement frameworks? Is it about reporting outcomes? Is it about, you know, independent oversight? What are the signals that you need to tell citizens that, you know, this is being deployed in your interest?

Paul Hubbard

Yeah, thanks. I think it's really about having a clear plan that you can communicate. In our case, that means making it clear throughout the economy, throughout government, throughout society, that we're going to seize the opportunity of AI. That means better jobs. That means investment in data centers and all the things we've been talking about. But the second thing, which is perhaps even more important, is we're going to spread the benefit of AI, not just to people in the tech sector, but to every aspect of the community: people in rural areas, people from marginalized groups, people who maybe haven't had the full benefit of current technology. So spreading that benefit further. And then finally, just making it really clear that we're also acting at every level, whether it's businesses or whether it's government, to keep citizens safe in the process.

We’ve had a big conversation here at a model level about AI safety and AI harms, but we’ve also got to have that conversation in the context of our communities and what does it look like to keep citizens safe there. So I think it’s the whole of. Society leadership piece. It’s not just saying, well, the tech people can look after this from a technical perspective.

Mridu Bhandari

Right. And, you know, ecosystems, of course, are very, very interdependent today. You have cloud providers, you have the telecom networks, you have enterprises. There are decisions flowing across the distributed stack by the second. So who is really accountable?

Erik Ekudden

Yes. I want to build on what Divyesh and Hari said here, and the difference between where we are today and when we are introducing agents at scale. And to me, it isn't so much a question of who, because if you are replacing work with an agent, that basically needs to translate into accountability, and then also a transparency, trust and governance issue around those agents. And increasingly, we get agents at different levels. There are super advanced agents at the top. And, of course, as you follow down the stack, we get more fine-grained agents, having less knowledge, making decisions that are guardrailed in a different way than the top models. So think of this as a hierarchy of decision-making and, of course, accountability.

But to me, there’s no question that if you are, and when you are introducing agentic technology, you need to take the responsibility for your part. If your complete service consists of many different agents on the cloud side, on the advanced connectivity side, on the application side, device side, it needs to come together. But, of course, responsibility should reside in the domain that you are providing, and that you are providing to the market, to the customer, to the employees. Then, of course, it’s never as simple as that, but in the world that I come from, in telecom, we’re already providing critical infrastructure. People’s everyday and life depend on it. So we have already guard -raised from a safety -security perspective that we have to move up to.

in today’s world of 5G and telecom. That, to me, should carry over into the, oh, yeah, an identic world. I know there are, of course, discussions about increasing governance, increasing regulation. I think that’s a dangerous way to go because if you regulate before you have innovated, you never know what you will get. But I think if you stay with these basic principles that we do have requirements and we have guardrails in the world we’re coming from, and you translate that more or less one -to -one into the identic world, I think we are on a good starting point.

Mridu Bhandari

Right. And, Divyesh, you know, we are talking about this agentic world, as Erik said, with humans and machines working hand-in-hand. Now, as these dynamics shift, how should we be rethinking governance? How should we be rethinking trust? And, of course, governance is never static. It's going to keep evolving. So what does dynamic oversight really look like, especially in a very regulated industry like yours?

Divyesh Vithlani

Look, I really love that question, because at the end of the day, as a CTO in a bank, I am accountable. I am responsible for the platform that we construct and the output that gets generated from that platform, whether it's from a human or an agent, right? So that's my accountability. And this is where I have interesting debates and conversations with colleagues from Wipro and other partners of mine who are very eager to sell me solutions. And I say, if the solution is a black box, then I'm going to find it very difficult to integrate that into my environment, because ultimately I have to be able to explain the output that gets generated. So, to your question in terms of that dynamic oversight, it again goes back to the platform and the way we've architected it.

The platform, without getting too technical, is on two planes. There's an execution plane and a control plane, right? But again, it's not that sophisticated. It's just like when you onboard a new graduate into your organization. You will give them a set of guardrails and a set of responsibilities that is befitting of their skill set and their experience. You provide the right level of supervision. You give them the right level of oversight. And as they grow and become more proficient, you clearly give them more responsibility. We treat agents in exactly the same way. So there's a lot of conversation about agents being autonomous and hallucinations. Well, individuals can do the same thing if they're left to their own devices, right?

So the way that we have built and architected our agentic architecture is that, as Erik said, there are different types of agents. At the lowest level, agents are not just autonomous, but they're atomic. And with the right set of guardrails, with agentic operating processes, they are also deterministic, right? And we basically create agents to perform a single task. And we make them as reusable as possible, to compose them and to aggregate them into a higher level of workflow. And as they learn more, which is the good thing about agents, they learn faster, you give them more responsibility, just like you do with humans. But again, going back to that execution plane, you are monitoring every activity that is being done through a control plane. And the other features of the platform include how we onboard and offboard agents, just like you do with humans.

And we also have practices in place to manage the conflicts between agents and humans, because, again, just like you have conflicts between two humans, you have conflicts between an agent and a human, right? And you need to be able to detect that in real time. So that's some of the work that we've done. And it's again early days; I don't mean we have all the answers, but certainly the space is moving very fast. The key is that we humans always have to be in control, so the way we design the architecture is to ensure that happens.
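Vithlani's two-plane split (a control plane that guardrails and audits an execution plane of atomic agents) might look, in a deliberately simplified sketch with invented names, levels, and policies rather than the bank's actual system, something like:

```python
# Illustrative sketch of an execution plane guarded by a control plane.
# Agents are granted an autonomy level, every action is audited, and
# anything above an agent's granted level is blocked.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    name: str
    level: int                     # higher level = more autonomy requested
    task: Callable[[str], str]     # the atomic unit of work

@dataclass
class ControlPlane:
    max_level: dict = field(default_factory=dict)  # per-agent autonomy ceiling
    audit_log: list = field(default_factory=list)  # every action is recorded

    def execute(self, agent: Agent, request: str) -> str:
        allowed = self.max_level.get(agent.name, 0)
        if agent.level > allowed:
            # Guardrail: the control plane refuses and records the attempt.
            self.audit_log.append((agent.name, request, "BLOCKED"))
            raise PermissionError(f"{agent.name} exceeds granted level {allowed}")
        result = agent.task(request)               # execution plane does the work
        self.audit_log.append((agent.name, request, "OK"))
        return result

cp = ControlPlane(max_level={"fx-quote": 1})
quote_agent = Agent("fx-quote", level=1, task=lambda r: f"quoted:{r}")
print(cp.execute(quote_agent, "USD/AED"))  # allowed and logged
```

Raising an agent's ceiling in `max_level` is the code-level analogue of giving the "graduate" more responsibility as it proves itself.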

Mridu Bhandari

So are agents being put through tough performance appraisals? Are they being fired for hallucinating?

Divyesh Vithlani

100%, right. And again, it may sound really basic, but I view an agent no differently to a human. So you do performance management. There's a concept that we call agent university, right, and I love that term, because I was chatting earlier with James about this: at university, you're learning how to learn, right? So that's what we want the agents to do as well. And, you know, whilst humans may fill out a timesheet to account for the work that they've done and to measure the output that they've produced for the cost that they've consumed, agents may not fill out a timesheet, but we're also monitoring and measuring the agent for the work, for the tokens that they've consumed, for the output that they've generated, to ensure that we measure their performance in a similar way.
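The "agent timesheet" idea, metering tokens consumed against output produced, could be sketched as follows. The agent name and the outputs-per-1,000-tokens metric are illustrative assumptions, not the bank's actual measures:

```python
# Hypothetical token-metering sketch: an agent's "performance" is tracked
# as output produced per token consumed, analogous to a human timesheet.
from collections import defaultdict

class AgentMeter:
    def __init__(self):
        self.tokens = defaultdict(int)
        self.outputs = defaultdict(int)

    def record(self, agent: str, tokens_used: int, outputs_produced: int):
        # Accumulate each task's cost (tokens) and yield (completed outputs).
        self.tokens[agent] += tokens_used
        self.outputs[agent] += outputs_produced

    def efficiency(self, agent: str) -> float:
        # Appraisal metric: outputs per 1,000 tokens consumed.
        return 1000 * self.outputs[agent] / max(self.tokens[agent], 1)

meter = AgentMeter()
meter.record("kyc-checker", tokens_used=4000, outputs_produced=20)
meter.record("kyc-checker", tokens_used=5000, outputs_produced=25)
print(meter.efficiency("kyc-checker"))  # 5.0 outputs per 1k tokens
```

A falling efficiency number over time would be the trigger for the kind of "performance appraisal" the moderator jokes about above.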

Mridu Bhandari

Wonderful. Well, Hari, bringing you in as well: how should organizations measure the ROI? That's a question that enterprises around the world have been debating. What's the value beyond the profit, or beyond the bottom line? Are we looking at trust scores? Are we looking at productivity? Are we looking at decision velocity, risk mitigation? At the core, how are you looking at ROI?

Hari Shetty

Probably one of the most debated topics, and one that I hear a lot about, and I will provide you the Wipro context in terms of how we are looking at productivity. Point number one: while everybody talks about use cases and productivity measurement of AI, we think, you know, AI is beyond just measuring return on investments or measuring productivity. It's almost like going back in time: could you ask, should we implement an email system, what's the ROI on the email system? Could you ask, for example, why should I go to the internet, I have a brochure already in the company, why should I be on the internet? So a lot of the thinking should change from looking at ROI to looking at AI as a fundamental capability and a fundamental shift, a journey which is irreversible in terms of where we are going. So it's not a question of should we invest because there's ROI or not; it's a question of we have to go down that path. And we look at it as a capability, so within Wipro we are not really asking this question of, for every single use case, is there an ROI on it. Now, having said that, you know, as a business leader, ROI is extremely important.

Mridu Bhandari

Well, your clients must be demanding the ROI for sure.

Hari Shetty

Yes, that’s equally true. So the elements that we talk about is the earliest signal of ROI is productivity, right? We always talk about productivity as an early indicator of what can come down the pipe, but productivity is only an early signal. The resulting benefit is basically always an end outcome. It can be cost. It can be units produced. It can be lower, better quality. It can be cycle time reduction. It’s many of those things. And our goal has always been to move beyond productivity because productivity is a number that people talk about very frequently in AI, but we are moving beyond productivity to look at some of those end outcomes that we can achieve. And our models are built to help clients understand the end benefit of AI rather than just look at productivity as an element.

Trust scores are becoming equally important. I will just touch upon trust scores for a minute. When we look at trust scores, we are looking at, you know, how many instances of failure did happen, and is that within the vector of what an organization, or the process, says is acceptable. So it's important to measure quality aspects, failure aspects, the hallucination that we talked about, all the other aspects of AI where it can go wrong, and then measure against the task goal and see whether it's appropriate for the process that we're talking about. So we had situations where we talked about probabilistic models, deterministic models.

We had customer cases where 100% was the only answer, or 99.99% was the answer. There are situations where 85% was good enough. So again, there's no one single answer to this. It depends on the kind of process, the kind of problem that we're trying to solve.
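Shetty's point that acceptability depends on the process can be captured in a tiny check. The process names and thresholds below are invented examples, not Wipro's:

```python
# Illustrative trust-score check: an observed failure rate is judged
# against the tolerance of the specific process, not a universal bar.
def trust_check(successes: int, failures: int, required_accuracy: float) -> bool:
    total = successes + failures
    observed = successes / total if total else 0.0
    return observed >= required_accuracy

# A payments-style process demanding 99.99% fails even at 99.98% observed...
print(trust_check(successes=9998, failures=2, required_accuracy=0.9999))  # False
# ...while a drafting assistant may be perfectly acceptable at 85%.
print(trust_check(successes=870, failures=130, required_accuracy=0.85))   # True
```

The same measured failure count thus passes one process and fails another, which is exactly why there is "no one single answer".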

Mridu Bhandari

Right. And do you think business innovation would perhaps be one of the biggest ROIs and any outstanding cases of business innovation that you’ve seen with AI being scaled successfully yet?

Hari Shetty

Yeah, that’s a fantastic question. And again, let me give you one or two quick examples because, you know, that would bring this to life. one of the projects that we did for a client this is an energy client and this is for a refinery and obviously you know everything was automated, instrumented, there are a lot of sensors all along the way and they were asking us what’s the value of AI in this context so the work that we did for them was basically analysis of a flame and you know interestingly out of the flame we could extract information about combustion efficiency, fuel to air mixture ratio, maintenance of the equipment we could derive out of models that we built just looking at the flame so the kind of information that we could actually secure just looking at the flame was so much superior to using a sensor based technology because sensors typically tell you something is working or something is not working based on a threshold, here we could actually find out the health of what’s happening with incremental change compared to looking at an on and off kind of a situation with sensors

Mridu Bhandari

Fantastic. Erik, you want to add?

Erik Ekudden

Yeah, can I just add one thing? I think it's so interesting to look at how, in our world, we talk about this intelligent fabric of 5G. And, of course, there are gains if you apply AI in terms of efficiency, in terms of productivity. You can get better customer experience. And you can mention that 10%, 15% is a great achievement, a 20% saving; we're talking about billions of dollars there. But where our customers get super excited is when they take, for example, the complete network, they use modelling on top of it, and then they can start to produce new outcomes. It's kind of a business growth. And, of course, it's not always that you can find that clear case.

But that’s really where AI and autonomous networks are helping. Saving, yes, TCO is important. But it’s very much about that business growth.

Mridu Bhandari

Any example you can share with us there?

Erik Ekudden

Yeah. So, glasses was one example. In the future, every device, every application, every AI service will need its own specific service quality, latency, all of that. So you can start to sell services that are tailored for mission critical, for enterprises. And that's what leading customers, including here, are doing. They're using AI for that kind of segmentation and growth of the business, and it's an upside that is unlimited. So, of course, it's more exciting.

Mridu Bhandari

Absolutely. Well, let’s also look at the long -term competitiveness and value creation that we can achieve with AI. Paul, if we were to project 10 years ahead, what do you think would really separate AI native nations from AI dependent nations? You know, is it infrastructure? Is it talent pipelines, compute capacity? What would you add to that list?

Paul Hubbard

I would add capability, competence, and curiosity. A lot of the things you mentioned, in terms of data centers and things like that, they will be built; there will be investment. But the underlying models, the compute, will be commoditized, and what will set countries apart is the ability of government institutions to adapt, the ability of the economy to be flexible to new approaches, and the ability of the workforce to find the new jobs, the new wants and needs that are created, and, where the bottlenecks shift, being able to move to those.

And I've got to say that, coming to India this week, I see not just competence, capability and curiosity, but a downright enthusiasm for this. So I think maybe India is one to watch.

Mridu Bhandari

Good to know that, and happy to hear that, of course. Because, well, Erik, you know, AI demands massive compute, massive energy, massive connectivity. Now, how do we reconcile infrastructure-scale AI expansion with sustainability? Even as AI scales globally, how do we ensure that efficiency is an imperative in everything that is deployed?

Erik Ekudden

Well, AI is energy-intense, especially now in the training phase. Some of the data that are out there are mind-boggling numbers, and I'm not even sure we're going to need the kind of energy that has been predicted. But as I was saying before, we're moving from that big data center training to distributed inference. That's where the puck is going. That means you need to scale to something like 8 billion inference devices such as glasses, tens of billions of sensors using AI, visual sensors. So what we are doing, and what needs to happen, is to really have energy-efficient hardware, energy-efficient software, energy-efficient AI models.

Small models when you can get away with that, and of course big models when you can't. So we're not going to explode energy consumption just because we use more AI. In fact, we're going to use even smarter and better ways to do it, both on the hardware and the software side. Then, just to put things in perspective: all the world's networks account for around one per cent of total power consumption. And by using more digital technology, you are able to reduce emissions in other sectors by as much as 15 per cent. So it's a 10 to 15 times payback on that energy consumption. And again, if you combine that with what I said about really being conscious about energy efficiency as you move further out, I think it's actually going to be a sustainable way to do a lot of things, not just replacing unnecessary travel and logistics chains with more digital means.

Everything is going to be more efficient, so I think we have to be a little bit careful before we say that it’s just exploding and it’s completely outrageous. Because if you just project those big data center training clusters, it looks scary, but that’s not the whole picture.

Mridu Bhandari

All right. Well, Divyesh, while we are talking of value creation from AI, many organizations are of course still measuring AI success in terms of cost savings. But at your organization, how are you reframing AI value in banking: resilience, fraud protection, customer trust, capital efficiency? What are some of the metrics that you are tracking to ensure that this is true value creation?

Divyesh Vithlani

I think it's a question that is constantly exercising our minds. If I start with the productivity question that you asked earlier: whilst there isn't a straightforward answer, I'd look at it at three levels. First, AI will provide micro-level productivity through co-pilots and technologies like that, which might be difficult to measure, but it's certainly helping with the overall level of literacy, education and awareness in the organization. Secondly, at the enterprise level, and this is your point on value creation, we absolutely see the potential of AI to drive significant ROI. When you take very complex processes that have been running on earlier technologies, whether it's RPA, OCR, et cetera, and you apply AI and agentic approaches, you can actually take them to the next level.

And these are extremely complex processes, which are error-prone, and you're talking about large sums of money. When we've applied AI and agentic approaches to them, we've seen incredible outcomes, which is giving us tangible value creation. The third aspect I would look at is this: if we really take a step back, certainly in banking, what is our biggest source of competitive advantage? It's not necessarily the technology or the products or any other capabilities, because the next person can come along and emulate those. It's really our ability to respond and react to change faster than our competitors. And that's what AI is going to help us do in terms of creating value, because it allows us to respond to change faster, do rapid experiments, and scale and double down where we think we will see a significant ROI.

Mridu Bhandari

Right. Okay, so I have a question for all of you, and perhaps you can take about 30 seconds each. Do you believe enterprises today are overestimating or underestimating AI risk? And how should leaders and boards measure AI trust readiness in practical terms? Hari, maybe you want to start on that one.

Hari Shetty

See, there is certainly a level of risk that one should be aware of and work with. And again, in every business there's always an element of risk that one has to mitigate, so AI is no different from that perspective. But at the same time, the hype about risk is also overstated. It's a manageable risk; it's not an uncontrolled, unmanageable risk. And with the right kind of tool set that Divyesh talked about, it's definitely possible to get the best value out of AI without actually exposing oneself to risk.

Mridu Bhandari

Okay, that's a very diplomatic, balanced answer you've given us. Erik, what do you think?

Erik Ekudden

I suspect that the risk assessment among enterprises has become quite realistic; they don't overestimate it, and the risks are manageable. Maybe on the government side there's still an overestimation of the risks, a tendency to be too cautious, and that, I think, could hold things back in certain public sectors and in other areas. Then again, the risks are very, very big if you mistreat this extremely powerful technology. So I'm not saying that we're over the hump, but that's what I think.

Mridu Bhandari

Paul, do you want to take that on, considering Erik just said that perhaps the public sector overestimates risk? Would you say that of the government in Australia as well?

Paul Hubbard

I mean, certainly governments have a responsibility to start off with a more cautious approach than private sector folk. I'd say there's a shift from the uncertainty of something new that isn't quantifiable to actually understanding the risk, and once you understand the risk, you can manage it. So certainly over the last year or so, the government of Australia has taken a much more active posture towards AI, in a sense embracing the risk a little bit more than we were in the past. But as we grow the capability, as we've got the foundation of trust and the guardrails that we need, it means you can actually manage that risk, and that's the key thing.

Mridu Bhandari

All right, Divyesh?

Divyesh Vithlani

Look, with any so-called new technology there is always going to be a level of fear, uncertainty and doubt. But the paradox for me is that AI is actually not a new technology. In fact, it predates cloud, mobile and robotics; I was writing AI programs at university. AI was just well ahead of its time. We needed the cloud to be able to process large amounts of data. We needed the kind of data centers that we're talking about for the compute, et cetera, for this technology to really come to light. And clearly, as we've gone through digital, social, cloud, and data, along the way we've seen many, many regulations around data protection, how best to use cloud, data sovereignty, data residency, et cetera.

So as long as we are not shedding the controls that we've already built, and we make sure that we tighten the guardrails as we deploy AI, deploying it through a platform-centric approach where you've built in the necessary guardrails, I think those risks can be managed and mitigated. And hopefully what we'll start to see is that the benefits of this combined technology will far outweigh the kind of risks and concerns that we're seeing. The only qualification I would make, and I think it's been talked about at this conference, is making sure that we take everyone along.

Mridu Bhandari

Absolutely. I mean, it has to be inclusive for all, especially in a country like India, where we have divides of many kinds. Well, let's spend a few minutes trying to look ahead and do some crystal-ball gazing. Erik, if I can come to you: we are entering the era of autonomous networks, embedded intelligence, physical AI, from robotics to massive systems. Now, what does AI-native mean? What does an AI-native network look like, say, five years from now, because anything more than five is just too much to envision, and how do we get the mobile and cloud infrastructure ready for that future?

Erik Ekudden

Well, I think we have to look perhaps further out than five years, because we're building something that should work for society in broad terms. But of course, AI is moving super fast, and when you ask about AI-native, I think that any industry, including the one I represent, is going through major change now. Being AI-native is not just about how you build your products, that they need to be data-driven, that they need to learn and be updated all the time. It's very much about your processes. It's about how you go to market with that, how you engage with lifecycle management, how you handle questions, and I think we talked about it in the pre-meeting as well.

There are so many changes in terms of how you build AI-native systems that it is a fundamental rework for, I would say, most product companies, and actually service companies as well. So an AI-native world is something that is much more responsive to these fast changes that we talked about. An AI-native network is a network that is responsive to all of these needs. You already mentioned physical AI, which is just around the corner: humanoids, robots, drones, all things that require much more tailoring, much more flexibility from that network, or the intelligent fabric. So we need to do what I call user experience at scale, or massive user experience.

Everything has to have its own unique requirements met. I think only AI-native networks that respond in real time to these needs, and that adapt and create the best user experience, can handle it. So it's going to be a very different world, very intuitive, judging by what we already see on the wearable side, but it's going to be a completely new setup.

Mridu Bhandari

Right. And Paul, as we're looking ahead, public-private partnerships are of course going to be key to any kind of success that we're going to see. Tell us a little bit about AI CoLab and your approach towards bringing together public institutions, academia and industry to advance the practical adoption of AI, while also keeping it transparent and ensuring that the public good is at the center of it.

Paul Hubbard

Absolutely. So the AI CoLab is a cross-sector initiative where folk from the government, folk from the private sector, academics and not-for-profits can get together in one place, and often in person, to understand things. And I think everybody who's come to the AI Impact Summit really understands that we can't do this alone. Nobody in their silo can solve the problem themselves. We've got to get capability from each other. We've got to learn from each other. And I think the 300,000 people who have been here this week have certainly proven that to be the case. I think it's also key to actually doing safe and responsible AI. It's not just the technical controls or the networks that we have.

It's having the people in the room who may not care about AI, but who do care about the services that are being delivered. They do care about their voice being heard. They do care about the environment around them as well. So it keeps bringing you back to reframing that: what's the problem we're trying to solve? What's the mission we're trying to achieve? And I think, if we want to talk about impact, that's the key question.

Mridu Bhandari

Right. All right. Well, let's also look at the financial angle with Divyesh. We've talked about open finance and very effective financial ecosystems. What is it really going to take to scale AI to that level, especially in the near and short term, to enable very responsible deployment and sustainable finance, with agri-farmers particularly, in the Indian context, given the complexities that we see in this country?

Divyesh Vithlani

So I think it's going to be a force for good. If I look at banking, I don't think the core of banking is going to change. However, how we bank, how we drive that experience for our customers, is, I think, going to be transformationally different in the future. Just one example, to pick up on your question: if you combine the technology of AI with, say, digital assets and stablecoins, you get the ability to move money as fast as e-mail. Why is it that it takes three or four days today to clear a cross-border payment? That goes completely against the whole concept of open finance and inclusion. So I think AI, together with some of these other technologies, is going to be a game changer in enabling things like that and really driving the experience to be much more natural, much more intuitive than it is today.

Personally, as a CTO, I hear a lot of questions about whether jobs are going to go away, et cetera. In any organization, certainly the banks that I've worked in, the CapEx demand on an annual basis typically outstrips supply at a ratio of five to one. But AI can help us change those legacy systems and modernize our platforms, because, let's be honest, 90 per cent of banks still operate with legacy technologies; there are very few greenfield banks. All of those technologies need to be modernized and upgraded, and I think AI, again, is going to be a force for good there. And once we modernize those systems, they will in turn lend themselves to connecting more seamlessly through microservices, APIs, and, without getting into the technical details, through MCPs, et cetera.

So I think that AI, together with some of these other technologies, digital assets and the like, will drive a very different paradigm in terms of how we bank.

Mridu Bhandari

Lovely. Very exciting times ahead. Well, Hari, if you were to give a CEO a three-step plan today to really scale responsibly, what would that be? Three things.

Hari Shetty

Okay. Number one, be very clear about what you want to achieve with AI. Get the vision right; have clear objectives in terms of what you want to achieve. That's the first part. The second part I would call out: don't think about tasks and task automation. Think about what AI does to your business. It's fundamentally an operating-model shift, and that is what can actually deliver value. So think big. Think about the operating-model shift that will require structural changes, changes to ways of working, skill changes; it's a complete transformation compared to mere automation. And the third thing: you know, please call Wipro.

Mridu Bhandari

All right. Let's now imagine that we are at the India AI Impact Summit 2030, just about four years ahead. What has changed in the way we live, work and play that hadn't happened the last time you were here, which is today? What has changed? Paul, do you want to start, and let your imagination go?

Paul Hubbard

Yeah, okay, look, as an economist it's very hard to predict the future. I think what will have changed is that there's a whole bunch of people turning up with job titles that we've never even heard of before, and they're telling us about things that people in a bureaucracy or a government can only dream about. So I think we'll see a lot more diversity in what people do.

Mridu Bhandari

Right, lots of new jobs. And yes, most industry reports suggest that many of the new jobs of the next decade have not been invented yet. So, absolutely.

Divyesh Vithlani

Well, in four years' time we may not be here in person; it will be our agents or avatars being teleported in, because the technology, through Ericsson's amazing network, has the bandwidth and the latency has improved vastly, and obviously with Wipro's technology creating these avatars and agents. But no, to be serious, I think what will have changed, at least from my perspective, is that banking will be a lot more seamless. It will really be about putting the customer first, rather than imposing the friction that we see today in how financial services work. For instance, we will be shopping much more intuitively. We won't even know that we need a new fridge or a new car.

It will just occur to us naturally, and something will appear on your doorstep that you didn't even know you needed; but once it arrives, you think, wow, that's exactly what I needed. The payment is taken care of. All the servicing is taken care of. So I think that is a near-term reality.

Mridu Bhandari

All right. Erik, Hari, go ahead.

Hari Shetty

A couple of things. One is, I'll definitely break my glasses and use Erik's glasses. More importantly, what I think will fundamentally change is the decision velocity. The decision velocity in organizations will completely change in the next four years. One of the key things we always hear in any enterprise is that the organization is so slow: the processes take a lot of time, things don't happen at the pace we all want, and the experience one gets out of a slow process is not necessarily a great experience. The fundamental problem that AI will solve, and I'm pretty sure it will solve it in the next couple of years, is that the velocity of everything will increase so tremendously that we'll look back and say, how did we ever tolerate something as slow as what we have today?

Erik Ekudden

Yeah, I wonder if it's doable in four years on a global scale. But what I hope we see four years from now is dissemination, diffusion, everyone being included in the fantastic journey that AI really is. I think it hinges on the kind of dialogue we're having here, and it's conditional on solving the trust issues. Because these things, security and privacy, we talk about them as things we can solve technically and so forth, but they need fundamental anchoring in how humans behave, so that you can really trust these agents, as was mentioned before, and so that we put the right constraints on them.

If that happens, then four years from now it's going to be so seamless, with our digital colleagues, AI colleagues, physical-AI colleagues and so forth, that it's going to be a completely different way of looking at work and, of course, of how you get help. I mean, you're going to be an agent of something which is much, much bigger than what you're commanding today. I think it's an enormous shift.

Mridu Bhandari

Absolutely. Well, fascinating times ahead. Thank you, gentlemen, for your incredible insights; that was very educational and informative for all of us. The takeaway for me from this conversation is clear: if people, planet and progress remain our guiding sutras, and if we can align all the seven pillars of global cooperation, AI is not just going to optimize businesses. It is going to redefine competitiveness, it is going to rebuild public trust, and, hopefully, it will future-proof all our institutions for the decades ahead. Thank you very much. I appreciate you all taking the time here, and thank you all for being a wonderful audience.


Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Paul Hubbard
4 arguments · 171 words per minute · 1045 words · 365 seconds
Argument 1
Trust is the foundation that enables innovation rather than hindering it, requiring a people-first approach that meets citizens where they are
EXPLANATION
Paul argues that trust and innovation should not be framed as opposing forces, but rather that trust provides the foundation that enables innovation. He emphasizes the importance of starting with what problems you’re trying to solve for citizens and meeting them where they are, rather than imposing new technology on them.
EVIDENCE
He mentions that different countries and populations have different comfort levels and familiarity with AI, so you need to know where people are and what they want before building from there. He advocates for a democratic participatory approach.
MAJOR DISCUSSION POINT
Building Trust and Public Permission for AI
AGREED WITH
Hari Shetty
Argument 2
Government accountability involves having clear plans for seizing AI opportunities while spreading benefits broadly and keeping citizens safe
EXPLANATION
Paul outlines three key aspects of government accountability in AI: having a clear plan to seize AI opportunities for better jobs and investment, spreading AI benefits to all segments of society including rural and marginalized groups, and ensuring citizen safety throughout the process.
EVIDENCE
He specifically mentions spreading benefits not just to people in tech centers but to rural areas, marginalized groups, and people who haven’t had full benefit of current technology. He also emphasizes the need for whole-of-society leadership rather than just leaving it to tech people.
MAJOR DISCUSSION POINT
Governance and Accountability in AI Systems
DISAGREED WITH
Erik Ekudden
Argument 3
Risk management shifts from uncertainty about new technology to understanding and managing quantifiable risks through proper guardrails
EXPLANATION
Paul describes how governments initially take a cautious approach to new technologies due to uncertainty, but as understanding grows and proper guardrails are established, they can move to actively managing quantifiable risks. He notes that Australia has taken a more active posture toward AI over the past year.
EVIDENCE
He mentions that the Australian government has shifted from a more cautious approach to embracing risk more as they’ve grown capability and established foundations of trust and necessary guardrails.
MAJOR DISCUSSION POINT
AI Risk Assessment and Management
AGREED WITH
Erik Ekudden, Divyesh Vithlani, Hari Shetty
DISAGREED WITH
Erik Ekudden
Argument 4
Cross-sector collaboration through initiatives like AI CoLab is essential for solving complex problems and ensuring responsible AI development
EXPLANATION
Paul emphasizes that no single organization can solve AI challenges alone, requiring collaboration between government, private sector, academia, and not-for-profits. He argues this collaboration is key to both capability building and ensuring safe, responsible AI that serves broader societal needs.
EVIDENCE
He references the AI CoLab as a cross-sector initiative where different stakeholders get together, often in person, to understand issues. He mentions the 300,000 people at the AI Impact Summit as proof that collaborative approaches work.
MAJOR DISCUSSION POINT
Future Vision and Transformation
Divyesh Vithlani
6 arguments · 145 words per minute · 2320 words · 958 seconds
Argument 1
Public trust in AI must be built through conviction at the organizational level and platform-first approaches with built-in safeguards
EXPLANATION
Divyesh argues that building trust in AI starts with conviction at the top of the organization that AI is a force for good, combined with a platform-first approach that embeds ethical AI and governance controls directly into the technology infrastructure. This allows organizations to unleash AI power while maintaining necessary safeguards.
EVIDENCE
He describes their approach of building a platform with layers from data, model, knowledge, context, and use cases, with ethical AI and data governance built in. He compares it to using Microsoft Excel – users don’t think about safety because it’s built into the platform.
MAJOR DISCUSSION POINT
Building Trust and Public Permission for AI
AGREED WITH
Paul Hubbard
Argument 2
AI governance requires treating agents like employees with performance management, onboarding/offboarding processes, and conflict resolution mechanisms
EXPLANATION
Divyesh proposes managing AI agents similarly to human employees, with graduated responsibility based on capability, performance monitoring, and structured processes for managing conflicts between agents and humans. He emphasizes that humans must always remain in control through proper architectural design.
EVIDENCE
He describes their agentic architecture with execution and control planes, onboarding/offboarding processes for agents, performance management including an ‘agent university’ concept, and monitoring systems that track agent performance similar to human timesheets but measuring tokens consumed versus output generated.
MAJOR DISCUSSION POINT
Governance and Accountability in AI Systems
AGREED WITH
Erik Ekudden
Argument 3
AI value creation occurs at three levels: micro-level productivity through co-pilots, enterprise-level process transformation, and competitive advantage through faster response to change
EXPLANATION
Divyesh outlines a three-tier framework for AI value creation, from individual productivity tools to complex enterprise process transformation, ultimately leading to competitive advantage through increased organizational agility and faster response to market changes.
EVIDENCE
He provides examples of applying AI to complex processes that previously used RPA and OCR technologies, achieving incredible outcomes with large sums of money. He emphasizes that the biggest competitive advantage is the ability to respond to change faster than competitors.
MAJOR DISCUSSION POINT
Measuring AI Value and ROI
Argument 4
AI platform architecture should include execution and control planes with guardrails that allow agents to learn and take on more responsibility over time
EXPLANATION
Divyesh describes a technical architecture approach where AI agents operate within structured guardrails similar to how new employees are managed, with atomic agents performing single tasks that can be composed into higher-level workflows as they prove their reliability.
EVIDENCE
He explains their platform design with execution and control planes, where agents start with limited responsibility and atomic tasks, then are given more responsibility as they learn and prove themselves, similar to how human employees progress in an organization.
MAJOR DISCUSSION POINT
Practical AI Implementation and Scaling
AGREED WITH
Erik Ekudden
Argument 5
AI risks can be managed by maintaining existing data protection and cloud security controls while tightening guardrails through platform-centric approaches
EXPLANATION
Divyesh argues that AI is not actually new technology but has been enabled by advances in cloud, data centers, and compute power. He suggests that existing regulatory frameworks for data protection and cloud security can be adapted for AI with additional guardrails built into platform-centric deployments.
EVIDENCE
He notes that AI predates cloud, mobile, and robotics technologies, and that regulations around data protection, cloud usage, data sovereignty, and residency already exist. He emphasizes not shedding existing controls but tightening guardrails through platform-centric approaches.
MAJOR DISCUSSION POINT
AI Risk Assessment and Management
AGREED WITH
Paul Hubbard, Erik Ekudden, Hari Shetty
Argument 6
AI will transform banking by making financial services more seamless and intuitive while enabling faster system modernization
EXPLANATION
Divyesh envisions AI transforming banking to be more customer-centric and seamless, while also helping modernize legacy systems that most banks still operate on. He sees AI combined with technologies like digital assets and stable coins enabling faster, more intuitive financial services.
EVIDENCE
He gives examples of intuitive shopping experiences where customers receive products they didn’t know they needed, with payments automatically handled. He also mentions that 90% of banks still operate with legacy technologies that need modernization, which AI can help accelerate.
MAJOR DISCUSSION POINT
Future Vision and Transformation
Erik Ekudden
6 arguments · 184 words per minute · 2305 words · 750 seconds
Argument 1
Networks must evolve from passive carriers to active enablers, becoming an intelligent fabric that hosts AI workloads at the edge
EXPLANATION
Erik argues that networks are transitioning from simply carrying data to becoming the hosting infrastructure for AI applications. He describes this evolution as creating an ‘intelligent fabric’ where AI inference happens distributed across the network rather than centralized in data centers.
EVIDENCE
He mentions that billions of smartphones already use AI applications across mobile infrastructure, and provides examples of AI glasses requiring real-time language translation and navigation support that cannot be processed on the device itself, requiring network-based AI processing.
MAJOR DISCUSSION POINT
AI Infrastructure and Network Intelligence
AGREED WITH
Divyesh Vithlani
Argument 2
AI applications require distributed inference capabilities across networks to support emerging technologies like AI glasses and industrial applications
EXPLANATION
Erik explains that the future of AI involves moving from centralized training in data centers to distributed inference across networks to support applications like AI glasses, industrial AI in agriculture, hospitals, and manufacturing that require low latency and real-time processing.
EVIDENCE
He demonstrates AI glasses that provide navigation support, real-time language translation, and presentation prompts, explaining that these applications cannot run on the wearable devices themselves and require offloading AI inference to the network edge.
MAJOR DISCUSSION POINT
AI Infrastructure and Network Intelligence
Argument 3
Accountability in AI systems requires clear responsibility at each domain level, with existing safety and security principles from critical infrastructure translating to the agentic world
EXPLANATION
Erik argues that as AI agents replace human work, accountability must be clearly defined at each level of the technology stack, with responsibility residing in each domain that provides services to customers or employees. He believes existing telecom safety and security principles should carry over to AI systems.
EVIDENCE
He describes a hierarchy of decision-making with super advanced agents at the top and more fine-grained agents with less knowledge making guard-railed decisions below. He references telecom’s existing critical infrastructure guardrails that people’s lives depend on.
MAJOR DISCUSSION POINT
Governance and Accountability in AI Systems
AGREED WITH
Divyesh Vithlani
DISAGREED WITH
Paul Hubbard
Argument 4
AI networks enable both cost savings and new business growth opportunities through tailored services for different applications
EXPLANATION
Erik explains that while AI in networks can provide efficiency gains and cost savings, the real excitement comes from enabling new business models through tailored services that meet specific requirements for different devices, applications, and use cases.
EVIDENCE
He mentions 10-50% efficiency gains worth billions of dollars, but emphasizes that customers get more excited about using AI modeling to produce new outcomes and business growth. He gives examples of mission-critical enterprise services with specific latency and quality requirements.
MAJOR DISCUSSION POINT
Measuring AI Value and ROI
Argument 5
Enterprise risk assessment has become realistic, though government sectors may still overestimate risks and be overly cautious
EXPLANATION
Erik believes that enterprises have developed realistic risk assessments for AI and understand that risks are manageable, but suggests that government sectors may still be overestimating risks and being too cautious, which could hold back progress in public sectors.
EVIDENCE
He acknowledges that risks are very big if you mistreat this powerful technology, but maintains that with proper treatment, the risks are manageable rather than insurmountable.
MAJOR DISCUSSION POINT
AI Risk Assessment and Management
AGREED WITH
Paul Hubbard, Divyesh Vithlani, Hari Shetty
DISAGREED WITH
Paul Hubbard
Argument 6
Future AI-native systems require fundamental rework of products, processes, and go-to-market strategies to be responsive to fast changes
EXPLANATION
Erik argues that becoming AI-native involves more than just building AI-powered products – it requires fundamental changes to how companies build products, manage processes, engage with markets, and handle lifecycle management to be responsive to rapid AI-driven changes.
EVIDENCE
He describes AI-native as requiring data-driven products that learn and update continuously, along with changes to processes, go-to-market approaches, and lifecycle management. He mentions the need for ‘massive user experience’ where everything has unique requirements met in real-time.
MAJOR DISCUSSION POINT
Future Vision and Transformation
Hari Shetty
5 arguments · 199 words per minute · 1619 words · 487 seconds
Argument 1
Trust in AI systems, like human trust, must be earned through consistent performance over time without hallucinations or fundamental flaws
EXPLANATION
Hari argues that trust in AI systems cannot be assumed but must be earned through reliable, consistent performance over extended periods. He emphasizes that only systems that work consistently without hallucinations or fundamental flaws can build the trust necessary for widespread adoption.
EVIDENCE
He outlines four principles for proof over promise: starting with problem-first thinking rather than model-first, understanding enterprise complexity, ensuring solutions work consistently every day/hour/minute, and building trust through long-term reliable performance.
MAJOR DISCUSSION POINT
Building Trust and Public Permission for AI
Argument 2
Successful AI scaling requires starting with problem-first thinking rather than model-first approaches, understanding enterprise complexity, and ensuring consistent daily operation
EXPLANATION
Hari advocates for a problem-first approach to AI implementation, emphasizing that enterprises have complex, messy environments with legacy systems and fragmented data that require different solutions than consumer applications. He stresses that solutions must work reliably every day, not just once.
EVIDENCE
He provides an example of analyzing flames in a refinery to extract information about combustion efficiency, fuel-to-air mixture ratios, and equipment maintenance – information superior to sensor-based technology. He also mentions using their own organization as ‘client zero’ to test solutions before offering them to clients.
MAJOR DISCUSSION POINT
Practical AI Implementation and Scaling
AGREED WITH
Paul Hubbard
Argument 3
AI should be viewed as a fundamental capability and irreversible journey rather than just measuring ROI for individual use cases
EXPLANATION
Hari argues that AI represents a fundamental shift similar to email or internet adoption, where asking for ROI on individual use cases misses the bigger picture. He suggests organizations should view AI as an essential capability rather than optional technology with measurable returns.
EVIDENCE
He compares AI adoption to historical questions about email systems or internet adoption, suggesting these were fundamental shifts that didn’t require traditional ROI justification. He emphasizes that AI is an irreversible journey that organizations must undertake regardless of individual use case ROI.
MAJOR DISCUSSION POINT
Measuring AI Value and ROI
Argument 4
AI risks are manageable rather than uncontrollable, requiring appropriate tools and frameworks but not being overstated
EXPLANATION
Hari takes a balanced view on AI risk, acknowledging that there are legitimate risks that need to be managed in any business context, but arguing that AI risks are not fundamentally different from other business risks and can be managed with proper tools and frameworks.
EVIDENCE
He states that while there is certainly a level of risk to be aware of, the hype about risk is overstated, and with the right kind of toolset that others have discussed, it’s possible to get the best value out of AI without exposing oneself to unmanageable risk.
MAJOR DISCUSSION POINT
AI Risk Assessment and Management
AGREED WITH
Paul Hubbard, Erik Ekudden, Divyesh Vithlani
Argument 5
The fundamental change AI will bring is dramatically increased decision velocity in organizations, making current processes seem intolerably slow
EXPLANATION
Hari predicts that AI’s most significant impact will be dramatically accelerating decision-making processes in organizations. He believes that within a few years, the speed of AI-enabled processes will make current organizational speeds seem unbearably slow, fundamentally changing expectations for business velocity.
EVIDENCE
He emphasizes that slow processes in enterprises lead to poor experiences, and that AI will solve the fundamental problem of organizational velocity, making everything happen at a pace that will make people wonder how they ever tolerated the current slow speed of business processes.
MAJOR DISCUSSION POINT
Future Vision and Transformation
Mridu Bhandari
5 arguments · 133 words per minute · 1768 words · 795 seconds
Argument 1
AI development should be guided by the principles of People, Planet and Progress, implemented through seven concrete pillars of aligned global cooperation
EXPLANATION
Mridu introduces a framework for sustainable AI development based on three core principles (People, Planet, Progress) that should guide AI implementation. She outlines seven specific areas or ‘chakras’ that serve as concrete pillars for turning AI ambitions into accountable action.
EVIDENCE
She specifically mentions the seven chakras as: human capital, inclusion, trust, resilience, science, resources and social good as the concrete pillars that will turn ambition into accountability.
MAJOR DISCUSSION POINT
Framework for Sustainable AI Development
Argument 2
The defining challenge of this AI-first decade is achieving trust before skill, outcomes over optics, and responsibility as a competitive advantage
EXPLANATION
Mridu frames the key questions that leaders must address in the current AI era, emphasizing that building trust should precede technical capabilities, that actual results matter more than appearances, and that taking responsibility for AI deployment should be viewed as creating competitive advantage rather than hindering it.
EVIDENCE
She poses these as the defining questions for the panel to address: ‘How can we achieve trust before skill? Outcomes over optics and responsibility as a competitive advantage.’
MAJOR DISCUSSION POINT
Strategic Priorities for AI Leadership
Argument 3
Public permission and transparency are critical requirements for AI deployment, as citizens globally are demanding greater accountability across all sectors
EXPLANATION
Mridu emphasizes that gaining public trust and permission is essential for AI implementation, particularly for social good applications. She notes that this requirement for transparency and accountability extends beyond AI to all areas where citizens interact with institutions and technology.
EVIDENCE
She states that ‘citizens all over the world today are demanding a lot more transparency and accountability when it comes to not just AI, but everything in general.’
MAJOR DISCUSSION POINT
Public Trust and Citizen Engagement in AI
Argument 4
AI systems must demonstrate the transition from promise to proof, with enterprises needing to move beyond perpetual pilots to scalable implementations
EXPLANATION
Mridu highlights the challenge that many enterprises face in moving from AI experimentation and hype to actual productive deployment. She emphasizes the need to distinguish between scalable AI solutions and endless pilot projects that never reach full implementation.
EVIDENCE
She asks about ‘what really separates scalable AI from perpetual pilots that we keep seeing a lot of enterprises deploying’ and emphasizes the importance of ‘proof over promise.’
MAJOR DISCUSSION POINT
AI Implementation and Scaling Challenges
Argument 5
Future AI success will require addressing complex accountability structures across distributed technology stacks with decisions flowing by the second
EXPLANATION
Mridu identifies the challenge of establishing clear accountability in AI systems that involve multiple interconnected components including cloud providers, telecom networks, and enterprises. She emphasizes that these distributed systems make decisions continuously, creating complex responsibility chains.
EVIDENCE
She notes that ‘ecosystems are very, very interdependent today. You have cloud providers, you have the telecom networks, you have enterprises. There are decisions flowing across the distributed stack by the second.’
MAJOR DISCUSSION POINT
Distributed AI System Accountability
Agreements
Agreement Points
Trust is foundational to AI innovation rather than opposing it
Speakers: Paul Hubbard, Divyesh Vithlani
Trust is the foundation that enables innovation rather than hindering it, requiring a people-first approach that meets citizens where they are
Public trust in AI must be built through conviction at the organizational level and platform-first approaches with built-in safeguards
Both speakers reject the false dichotomy between trust and innovation, arguing that trust actually enables and accelerates AI innovation when properly implemented through people-first approaches and platform-based safeguards
AI risks are manageable with proper frameworks and controls
Speakers: Paul Hubbard, Erik Ekudden, Divyesh Vithlani, Hari Shetty
Risk management shifts from uncertainty about new technology to understanding and managing quantifiable risks through proper guardrails
Enterprise risk assessment has become realistic, though government sectors may still overestimate risks and be overly cautious
AI risks can be managed by maintaining existing data protection and cloud security controls while tightening guardrails through platform-centric approaches
AI risks are manageable rather than uncontrollable, requiring appropriate tools and frameworks but not being overstated
All speakers agree that AI risks, while real, are manageable through proper frameworks, existing security controls, and appropriate guardrails rather than being insurmountable obstacles
Platform-first approaches are essential for scaling AI responsibly
Speakers: Divyesh Vithlani, Erik Ekudden
AI platform architecture should include execution and control planes with guardrails that allow agents to learn and take on more responsibility over time
Networks must evolve from passive carriers to active enablers, becoming an intelligent fabric that hosts AI workloads at the edge
Both speakers emphasize the importance of building robust platform infrastructure – whether for banking AI systems or network intelligence – that provides the foundation for safe, scalable AI deployment
AI governance requires clear accountability structures with domain-specific responsibility
Speakers: Erik Ekudden, Divyesh Vithlani
Accountability in AI systems requires clear responsibility at each domain level, with existing safety and security principles from critical infrastructure translating to the agentic world
AI governance requires treating agents like employees with performance management, onboarding/offboarding processes, and conflict resolution mechanisms
Both speakers advocate for clear accountability frameworks where responsibility is assigned at appropriate levels, whether through domain-specific responsibility or structured agent management similar to human resources
Problem-first thinking should drive AI implementation over technology-first approaches
Speakers: Paul Hubbard, Hari Shetty
Trust is the foundation that enables innovation rather than hindering it, requiring a people-first approach that meets citizens where they are
Successful AI scaling requires starting with problem-first thinking rather than model-first approaches, understanding enterprise complexity, and ensuring consistent daily operation
Both speakers emphasize starting with the problems to be solved and the needs of end users rather than beginning with AI models or technology capabilities
Similar Viewpoints
Both speakers view AI as a transformational capability that goes beyond simple productivity gains to fundamental business transformation, though they approach measurement differently
Speakers: Divyesh Vithlani, Hari Shetty
AI value creation occurs at three levels: micro-level productivity through co-pilots, enterprise-level process transformation, and competitive advantage through faster response to change
AI should be viewed as a fundamental capability and irreversible journey rather than just measuring ROI for individual use cases
Both speakers see AI as fundamentally transforming organizational speed and responsiveness, requiring complete rethinking of how businesses operate
Speakers: Erik Ekudden, Hari Shetty
Future AI-native systems require fundamental rework of products, processes, and go-to-market strategies to be responsive to fast changes
The fundamental change AI will bring is dramatically increased decision velocity in organizations, making current processes seem intolerably slow
Both speakers emphasize the importance of ensuring AI benefits reach all segments of society, whether through government policy or financial inclusion
Speakers: Paul Hubbard, Divyesh Vithlani
Government accountability involves having clear plans for seizing AI opportunities while spreading benefits broadly and keeping citizens safe
AI will transform banking by making financial services more seamless and intuitive while enabling faster system modernization
Unexpected Consensus
AI as fundamental infrastructure rather than optional technology
Speakers: Erik Ekudden, Hari Shetty, Divyesh Vithlani
AI applications require distributed inference capabilities across networks to support emerging technologies like AI glasses and industrial applications
AI should be viewed as a fundamental capability and irreversible journey rather than just measuring ROI for individual use cases
AI value creation occurs at three levels: micro-level productivity through co-pilots, enterprise-level process transformation, and competitive advantage through faster response to change
Despite coming from different sectors (telecom, consulting, banking), all three speakers converged on viewing AI as essential infrastructure rather than optional technology, suggesting a maturation in thinking about AI’s role
Human-AI collaboration models based on existing organizational structures
Speakers: Divyesh Vithlani, Erik Ekudden
AI governance requires treating agents like employees with performance management, onboarding/offboarding processes, and conflict resolution mechanisms
Accountability in AI systems requires clear responsibility at each domain level, with existing safety and security principles from critical infrastructure translating to the agentic world
Both speakers independently arrived at the idea that AI systems should be managed using familiar organizational and infrastructure management principles, suggesting practical governance approaches are emerging
Trust as a competitive advantage rather than compliance burden
Speakers: Paul Hubbard, Divyesh Vithlani, Hari Shetty
Trust is the foundation that enables innovation rather than hindering it, requiring a people-first approach that meets citizens where they are
Public trust in AI must be built through conviction at the organizational level and platform-first approaches with built-in safeguards
Trust in AI systems, like human trust, must be earned through consistent performance over time without hallucinations or fundamental flaws
All speakers reframed trust from a regulatory compliance issue to a strategic business advantage, which is unexpected given typical discussions that position trust and innovation as competing priorities
Overall Assessment

The speakers demonstrated remarkable consensus across multiple dimensions: viewing AI as foundational infrastructure, emphasizing trust as enabling rather than hindering innovation, advocating for problem-first approaches, and agreeing that risks are manageable through proper frameworks. They converged on practical governance approaches using familiar organizational structures and consistently emphasized the importance of inclusive benefits distribution.

High level of consensus with strong alignment on fundamental principles and practical approaches. This suggests the AI governance discussion has matured beyond basic debates about whether to adopt AI toward more sophisticated questions about how to implement it responsibly and effectively. The cross-sector agreement (government, telecom, banking, consulting) indicates these principles may be broadly applicable across industries and contexts.

Differences
Different Viewpoints
Government vs. Enterprise Risk Assessment Approaches
Speakers: Erik Ekudden, Paul Hubbard
Enterprise risk assessment has become realistic, though government sectors may still overestimate risks and be overly cautious
Risk management shifts from uncertainty about new technology to understanding and managing quantifiable risks through proper guardrails
Erik suggests governments are being overly cautious and overestimating AI risks, which could hold back progress in public sectors, while Paul defends the government approach as necessarily cautious initially but evolving toward active risk management as understanding grows
Regulation Timing and Approach
Speakers: Erik Ekudden, Paul Hubbard
Accountability in AI systems requires clear responsibility at each domain level, with existing safety and security principles from critical infrastructure translating to the agentic world
Government accountability involves having clear plans for seizing AI opportunities while spreading benefits broadly and keeping citizens safe
Erik warns against regulating before innovating and advocates for translating existing telecom guardrails to AI, while Paul emphasizes the need for comprehensive government planning and whole-of-society leadership in AI governance
Unexpected Differences
Fundamental Nature of AI Technology
Speakers: Divyesh Vithlani, Hari Shetty
AI risks can be managed by maintaining existing data protection and cloud security controls while tightening guardrails through platform-centric approaches
AI should be viewed as a fundamental capability and irreversible journey rather than just measuring ROI for individual use cases
Divyesh argues that AI is not actually new technology but predates cloud and mobile technologies, suggesting continuity with existing approaches, while Hari treats AI as a fundamental shift comparable to email or internet adoption. This creates tension between evolutionary vs. revolutionary framing of AI’s impact
Overall Assessment

The discussion reveals relatively low levels of direct disagreement, with most tensions arising around implementation approaches rather than fundamental principles. Key areas of difference include government vs. private sector risk tolerance, regulation timing, and whether AI represents evolutionary or revolutionary change

Low to moderate disagreement level. The speakers largely align on core principles of trust, responsibility, and the transformative potential of AI, but differ on tactical approaches to governance, risk management, and implementation strategies. These disagreements reflect healthy debate about best practices rather than fundamental philosophical divisions, suggesting good potential for collaborative solutions

Partial Agreements
Both agree that AI represents a fundamental transformation beyond simple ROI calculations, but Hari advocates for viewing AI as a capability like email or internet (not requiring ROI justification), while Divyesh provides a structured three-tier framework for measuring and demonstrating AI value to stakeholders
Speakers: Hari Shetty, Divyesh Vithlani
AI should be viewed as a fundamental capability and irreversible journey rather than just measuring ROI for individual use cases
AI value creation occurs at three levels: micro-level productivity through co-pilots, enterprise-level process transformation, and competitive advantage through faster response to change
Both agree on the need for distributed, intelligent infrastructure, but Erik focuses on network-level intelligence and edge computing capabilities, while Divyesh emphasizes platform-level governance and control mechanisms for managing AI agents
Speakers: Erik Ekudden, Divyesh Vithlani
AI applications require distributed inference capabilities across networks to support emerging technologies like AI glasses and industrial applications
AI platform architecture should include execution and control planes with guardrails that allow agents to learn and take on more responsibility over time
Takeaways
Key takeaways
Trust is foundational to AI innovation rather than a barrier – it enables rather than hinders progress and must be built through people-first approaches
AI infrastructure must evolve from passive carriers to intelligent fabrics that actively enable AI workloads at the edge through networks like 5G/6G
AI governance should treat agents like employees with performance management, guardrails, and accountability structures built into platform architectures
AI value should be measured beyond ROI – it represents a fundamental capability shift requiring assessment of productivity, business outcomes, and competitive advantage
Successful AI scaling requires problem-first thinking, understanding enterprise complexity, and ensuring consistent operation rather than perpetual pilots
AI risks are manageable through proper frameworks and existing security controls, though government sectors may be overly cautious while enterprises have realistic assessments
Future AI-native organizations will require fundamental rework of products, processes, and business models to be responsive to rapid changes
Cross-sector collaboration between government, private sector, and academia is essential for responsible AI development and public good
AI will dramatically increase decision velocity in organizations, making current slow processes seem intolerable in the future
Resolutions and action items
Organizations should adopt platform-first approaches with built-in ethical AI and governance controls
Governments should focus on clear communication plans that demonstrate AI benefits while keeping citizens safe
Enterprises should move beyond measuring individual use case ROI to viewing AI as a fundamental capability investment
Leaders should implement three-step scaling plans: clear AI vision, focus on operating model transformation rather than task automation, and structural organizational changes
Networks must be designed with energy-efficient hardware and software to support distributed AI inference at scale
AI governance frameworks should include agent onboarding/offboarding, performance management, and conflict resolution mechanisms
Unresolved issues
How to achieve global-scale AI diffusion and inclusion within a 4-year timeframe remains uncertain
Balancing AI innovation speed with appropriate regulatory oversight without stifling development
Reconciling massive infrastructure and energy demands of AI with sustainability goals
Determining optimal risk tolerance levels across different industries and use cases
Addressing the digital divide to ensure AI benefits reach marginalized communities and rural areas
Managing the transition period as job roles transform and new positions emerge that don’t yet exist
Establishing international standards for AI accountability across distributed, multi-vendor technology stacks
Suggested compromises
Accept that AI risks are manageable rather than seeking zero-risk approaches that could stifle innovation
Use existing regulatory frameworks from related technologies (data protection, cloud security) as starting points rather than creating entirely new governance structures
Allow different risk tolerance levels (85% vs 99.99% accuracy) based on specific use cases and industry requirements
Balance centralized AI training with distributed inference to optimize both performance and energy efficiency
Implement graduated autonomy for AI agents similar to human employee development – starting with limited responsibility and increasing over time
Focus regulation on outcomes and accountability rather than prescriptive technical requirements that could limit innovation
Combine public and private sector expertise through collaborative initiatives rather than siloed development approaches
Thought Provoking Comments
It’s not either or. It’s not about you have trust or you have productive AI… there is no compromise on risks and controls. Our business in banking relies 100% on trust. So that is not a value that we can compromise on any time. However, in order to make sure that we do deploy AI at scale in a trusted manner, it starts with conviction.
This comment reframes the entire trust vs. innovation debate by rejecting the false dichotomy. It establishes that trust isn’t a barrier to AI adoption but rather a foundational requirement, especially in regulated industries. The emphasis on ‘conviction’ as a starting point is particularly insightful as it suggests organizational commitment precedes technical implementation.
This comment shifted the discussion from viewing trust and innovation as competing priorities to understanding them as complementary necessities. It influenced subsequent speakers to discuss governance frameworks and platform approaches rather than trade-offs, fundamentally changing how the panel approached AI implementation strategies.
Speaker: Divyesh Vithlani
AI is no longer about pilots. It’s about being able to get value out of AI… don’t start with a model, don’t talk about model x or model y and then start with a model first thinking, start with a problem first thinking.
This insight challenges the prevalent technology-first approach in AI adoption. By advocating for problem-first thinking, it addresses a critical issue where organizations get caught up in AI capabilities rather than focusing on actual business problems that need solving. This represents a maturation in AI thinking from experimentation to practical application.
This comment redirected the conversation toward practical value creation and moved the discussion away from theoretical AI capabilities to concrete business outcomes. It prompted other panelists to share specific examples of successful AI implementations and influenced the later discussion about ROI measurement and business innovation.
Speaker: Hari Shetty
I view an agent no different to a human so you do performance management… there’s a concept that we call agent university… whilst humans may not fill out a timesheet to account for the work that they’ve done… we’re also monitoring the agent for the tokens that they’ve consumed for the output that they’ve generated.
This anthropomorphic approach to AI governance is remarkably innovative, treating AI agents with human-like management frameworks including performance reviews, training, and accountability measures. The ‘agent university’ concept and the parallel between human timesheets and token consumption monitoring represents a novel governance model that makes AI management more relatable and systematic.
This comment introduced a completely new framework for thinking about AI governance that resonated throughout the remainder of the discussion. It influenced questions about agent accountability, performance measurement, and even prompted a humorous exchange about ‘firing agents for hallucinating,’ while establishing a practical model for AI management that other organizations could adopt.
Speaker: Divyesh Vithlani
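The "agent as employee" model described here can be pictured as a small ledger per agent: onboarding/offboarding status, a token-consumption "timesheet", and a running performance record. The sketch below is a minimal illustration of that idea; the `AgentRecord` class, field names, and agent ID are hypothetical, not from any system discussed in the session.

```python
# Hypothetical sketch of an "agent as employee" ledger: onboarding and
# offboarding, per-task token accounting (the agent's "timesheet"), and
# a simple performance measure. All names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class AgentRecord:
    agent_id: str
    active: bool = False
    tokens_used: int = 0      # analogous to hours on a timesheet
    tasks_completed: int = 0
    tasks_failed: int = 0     # e.g. output rejected for hallucination

    def onboard(self) -> None:
        self.active = True

    def offboard(self) -> None:
        self.active = False

    def log_task(self, tokens: int, success: bool) -> None:
        """Record one unit of work: tokens consumed and the outcome."""
        self.tokens_used += tokens
        if success:
            self.tasks_completed += 1
        else:
            self.tasks_failed += 1

    def success_rate(self) -> float:
        total = self.tasks_completed + self.tasks_failed
        return self.tasks_completed / total if total else 0.0

ledger = AgentRecord("claims-triage-agent")   # hypothetical agent
ledger.onboard()
ledger.log_task(tokens=1200, success=True)
ledger.log_task(tokens=900, success=False)
print(ledger.tokens_used, ledger.success_rate())  # 2100 0.5
```

A record like this is what makes the humorous "firing agents for hallucinating" exchange operational: a sustained drop in `success_rate` is the signal that would trigger retraining or offboarding, just as performance reviews do for human staff.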
We’re moving from that big data center training to the distributed inference… That means that you need to scale it to like 8 billion inference for glasses. Tens of billions of sensors using AI… So we’re not going to explode energy consumption just because we use more AI.
This technical insight challenges the prevailing narrative about AI’s unsustainable energy consumption by distinguishing between energy-intensive training phases and more efficient distributed inference. It provides a nuanced view of AI’s environmental impact and suggests a more sustainable path forward through architectural changes.
This comment reframed the sustainability discussion from doom-and-gloom predictions to a more optimistic and technically grounded perspective. It influenced the conversation to focus on practical solutions for sustainable AI deployment and helped balance concerns about AI’s environmental impact with realistic projections about future efficiency improvements.
Speaker: Erik Ekudden
AI is beyond just measuring return on investments… it’s almost like going back in time could you ask should we implement an email system what’s the ROI on the email system… so a lot of the thinking should change from looking at ROI to looking at AI as a fundamental capability and a fundamental shift and a journey which is irreversible.
This analogy brilliantly contextualizes AI adoption by comparing it to foundational technologies like email and the internet. It challenges the conventional business approach of demanding immediate ROI for transformational technologies and suggests that AI should be viewed as an inevitable capability rather than an optional investment.
This comment fundamentally shifted how the panel discussed AI value measurement, moving from traditional ROI metrics to broader capability building. It influenced the subsequent discussion about long-term competitiveness and helped establish AI as a strategic imperative rather than a tactical tool, affecting how other panelists framed their responses about organizational AI adoption.
Speaker: Hari Shetty
If you regulate before you have innovated, you never know what you will get. But I think if you stay with these basic principles that we do have requirements and we have guardrails in the world we’re coming from, and you translate that more or less one-to-one into the agentic world, I think we are on a good starting point.
This comment provides a nuanced approach to AI regulation that balances innovation with safety. Rather than calling for new regulatory frameworks, it suggests adapting existing proven governance models from established industries like telecommunications. This perspective offers a practical path forward that avoids regulatory paralysis while maintaining necessary safeguards.
This insight influenced the discussion about accountability and governance by providing a middle ground between over-regulation and under-regulation. It helped shape the conversation toward practical governance approaches and influenced other panelists to discuss how existing regulatory frameworks in their industries could be adapted for AI, rather than starting from scratch.
Speaker: Erik Ekudden
Overall Assessment

These key comments fundamentally shaped the discussion by challenging conventional wisdom and introducing innovative frameworks for AI adoption. The conversation evolved from abstract concepts about AI trust and innovation to concrete, actionable approaches for implementation. Vithlani’s reframing of trust as foundational rather than optional, combined with Shetty’s problem-first approach, established a mature perspective on AI adoption that moved beyond pilot projects to scalable solutions. The anthropomorphic governance model and the sustainability reframing provided practical solutions to common AI concerns, while the capability-versus-ROI perspective and regulation-innovation balance offered strategic guidance for long-term AI adoption. Together, these insights transformed what could have been a typical AI hype discussion into a substantive conversation about practical, responsible AI implementation at scale.

Follow-up Questions
How do we measure the ROI of AI beyond traditional productivity metrics?
This question was raised multiple times throughout the discussion as enterprises struggle to quantify AI value beyond cost savings, with participants suggesting various approaches but no definitive framework emerging
Speaker: Mridu Bhandari
What governance models work best for agentic AI systems in regulated industries?
As AI agents become more autonomous, there’s a need to understand how to adapt existing governance frameworks, particularly in banking and other regulated sectors
Speaker: Divyesh Vithlani
How do we scale AI infrastructure sustainably while meeting massive compute and energy demands?
The tension between AI’s energy requirements and sustainability goals requires further research into energy-efficient hardware, software, and deployment models
Speaker: Mridu Bhandari
What are the specific technical requirements for AI-native networks to support distributed inference at scale?
The transition from centralized training to distributed inference across billions of devices requires detailed technical specifications and infrastructure planning
Speaker: Erik Ekudden
How do we measure and manage agent performance, conflicts, and accountability in real-world deployments?
As organizations deploy AI agents at scale, they need practical frameworks for performance management, conflict resolution between agents and humans, and clear accountability structures
Speaker: Divyesh Vithlani
What distinguishes AI-native nations from AI-dependent nations in terms of long-term competitiveness?
Understanding the fundamental differences in approach, capability, and outcomes between nations that build AI capabilities versus those that merely consume AI services
Speaker: Paul Hubbard
How do we ensure AI diffusion and inclusion reaches all segments of society, particularly in diverse countries like India?
The challenge of making AI benefits accessible across different socioeconomic groups, rural areas, and marginalized communities requires targeted research and policy approaches
Speaker: Erik Ekudden
What are the practical frameworks for building public trust in AI while maintaining innovation speed?
Balancing the need for public confidence and democratic participation with the pace of technological advancement requires refined approaches to stakeholder engagement
Speaker: Paul Hubbard
How do we transition from AI pilots to scalable, production-ready solutions that work consistently?
Many organizations struggle to move beyond proof-of-concept projects to enterprise-scale AI implementations that deliver reliable business value
Speaker: Hari Shetty

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Trusted Connections: Ethical AI in Telecom & 6G Networks


Session at a glance: summary, keypoints, and speakers overview

Summary

This discussion focused on the integration of artificial intelligence in telecommunications networks, held as part of India’s AI Impact Summit 2026 and organized by the Telecom Regulatory Authority of India (TRAI). TRAI Chairman Anil Kumar Lahoti opened the session by emphasizing that AI is no longer an add-on to telecommunications but has become a foundational capability, particularly crucial given India’s scale of over 1.3 billion telecom subscribers. He highlighted that AI is already delivering tangible benefits, including significant energy savings and the blocking of nearly 400 million spam calls daily through AI-powered filtering systems.


The panel discussion featured industry experts from major telecommunications equipment manufacturers including Ericsson, Qualcomm, Nokia, and Tejas Networks. Magnus Ewerbring from Ericsson emphasized India’s advantageous position with over 90% 5G population coverage, noting that networks already use AI for optimization but are moving toward fully autonomous operations by 2028. Dr. Vinesh Sukumar from Qualcomm discussed the democratization of AI through edge computing on personal devices, highlighting the importance of hybrid AI systems that balance cloud and edge processing.


Pasi Toivanen from Nokia stressed the critical importance of building collaborative ecosystems among technology players, regulators, and government agencies to fully capture AI’s potential while addressing security risks. Shantaram Jagannath from Tejas Networks presented a framework for AI adoption based on cost optimization and revenue generation, suggesting that telecom networks could become platforms for AI services similar to app stores. The discussion addressed key challenges including maintaining service equity between urban and rural areas, managing the transition from current networks to AI-native systems, and preparing for a future where AI agents may outnumber human users on networks. The consensus emerged that successful AI integration in telecommunications requires end-to-end ecosystem thinking, robust regulatory frameworks, and collaborative approaches to ensure both innovation and public trust.


Keypoints

Major Discussion Points:

AI-Native Network Evolution: The transition from traditional telecom networks to AI-native 6G networks, where AI will be intrinsic rather than an add-on application layer, requiring networks to become fully autonomous and self-healing


Responsible AI Implementation and Governance: The need for risk-based regulatory frameworks, transparency, accountability, and consumer protection as AI systems in telecom can affect millions of users simultaneously, with emphasis on maintaining public trust


Edge vs. Cloud AI Processing: The strategic decisions around hybrid AI architectures, determining which AI functions should run on network edge devices versus centralized cloud systems, balancing performance, privacy, and efficiency


Ecosystem Collaboration and Standards: The importance of building comprehensive ecosystems involving telecom operators, technology companies, regulators, and government agencies to address AI challenges collectively rather than in isolation


Scalability and Infrastructure Challenges: Managing the transition of India’s massive telecom infrastructure (1.3 billion subscribers) to AI-enabled systems while ensuring equitable access across urban and rural areas and preparing for exponential growth in AI agents


Overall Purpose:

The discussion aimed to explore how telecom networks must evolve to support the AI era, focusing on technical architecture, regulatory frameworks, and responsible implementation strategies. The session was part of India’s AI Impact Summit 2026, specifically examining how to prepare telecom infrastructure for AI-native operations while maintaining security, sustainability, and equitable access.


Overall Tone:

The discussion maintained a consistently optimistic and forward-looking tone throughout. Speakers expressed confidence in India’s position as a leader in AI-telecom convergence, citing the country’s extensive 5G coverage and digital infrastructure. The tone was collaborative and solution-oriented, with industry experts emphasizing partnership and ecosystem thinking. While acknowledging significant technical and regulatory challenges, the overall sentiment remained positive about the transformative potential of AI in telecommunications, with speakers viewing obstacles as manageable through proper planning and cooperation.


Speakers

Speakers from the provided list:


Ms. Pallavi Mishra – Event moderator/host organizing the discussion on AI and telecommunication at India AI Impact Summit 2026


Shri Anil Kumar Lahoti – Honorable Chairman, Telecom Regulatory Authority of India (TRAI), telecom regulatory expert with dynamic leadership in the telecom regulatory ecosystem


Shri Ritu Ranjan Mittar – Member TRAI, telecom policy expert with over three decades of experience in telecom networks, global standards, and spectrum policy, session moderator


Magnus Ewerbring – Chief Technology Officer for Asia Pacific at Ericsson, global telecom innovator involved in developing region’s long-term technology vision from 5G deployment to 6G readiness


Dr. Vinesh Sukumar – Vice President, Product Management at Qualcomm, seasoned product leader with over 20 years of experience in large-scale AI, deep learning, and mobile technologies across global telecom ecosystem


Mr. Pasi Toivanen – Representative from Nokia, leads strategic engagement with governments and industry on AI and connectivity initiatives, driving large-scale ecosystem collaboration in cloud, AI and AI RAN


Mr. Shantaram Jagannath – Representative from Tejas Networks, technology strategist leading wireless products, network management systems, and AI-driven innovations


Audience – Participant who asked a question during the Q&A session


Additional speakers:


None identified beyond the provided speaker names list.


Full session report: comprehensive analysis and detailed insights

This comprehensive discussion on artificial intelligence integration in telecommunications networks took place during India’s AI Impact Summit, organised by the Telecom Regulatory Authority of India (TRAI) in collaboration with India AI under the Ministry of Electronics and IT. The session was held on February 20th, marking TRAI’s 29th anniversary of shaping India’s telecommunications landscape, and brought together representatives from telecom operators, technology original equipment manufacturers (OEMs), policymakers, government officials, academia, and media to address the transformative convergence of AI and telecommunications.


Foundational Shift: From AI Add-On to AI-Native Networks

TRAI Chairman Anil Kumar Lahoti opened the session with a paradigm-shifting perspective that fundamentally reframed the relationship between AI and telecommunications. Rather than viewing AI as merely an application layer or add-on service, Lahoti emphasised that artificial intelligence has become a foundational capability intrinsic to network operations. He articulated a vision where upcoming 6G technology will be inherently AI-native, transforming telecom networks from simple data carriers into central pillars of India’s AI infrastructure.


This foundational shift carries profound implications for network design, operation, and user experience. Lahoti described networks that can self-heal, detect faults proactively before users experience problems, and deliver seamless connectivity to billions without interruption—capabilities that move beyond science fiction into practical reality. The Chairman positioned India’s nationwide fibre backbones and mobile broadband networks as constituting one of the world’s most widely distributed digital infrastructures, operating within mature operational and regulatory frameworks that provide a solid foundation for AI integration.


India’s Strategic Advantage and Current AI Benefits

The discussion highlighted India’s unique position in the global AI-telecommunications convergence, with the country operating telecom networks at unprecedented scale. With over 1.3 billion telecom subscribers and more than 1 billion data users, India represents a testing ground where AI-driven automation transitions from optional enhancement to indispensable necessity. This scale advantage positions India to lead global innovation in AI-native network operations.


Current AI implementations are already delivering tangible benefits across multiple dimensions. Operators report significant energy savings through AI optimisation, whilst AI and blockchain-based filtering systems now flag or block nearly 400 million suspected spam calls or messages daily. Enhanced enforcement capabilities have led to the disconnection of approximately 2.1 million spam numbers, demonstrating AI’s effectiveness in combating fraud and improving consumer safety. Additionally, TRAI is advancing the rollout of a digital consent acquisition framework following successful pilot runs with banks, ensuring consumers maintain digital control over consent for commercial communications.


Industry Expert Perspectives on Network Evolution

The panel discussion featured distinguished experts from major telecommunications equipment manufacturers, each offering unique insights into AI integration challenges and opportunities. Magnus Ewerbring from Ericsson emphasised India’s advantageous position with over 90% 5G population coverage, noting that whilst networks already utilise AI for optimisation, the industry is progressing toward fully autonomous operations. He outlined specific achievements his company has observed, including 10% capacity optimisation in link adaptation and 33% energy efficiency improvements in network operations.


Ewerbring described the industry’s journey toward level 4 autonomy by 2028, as defined by TM Forum standards, with 6G networks targeting fully autonomous level 5 operations. This progression represents a systematic evolution from current semi-automated systems to networks that can operate independently with minimal human intervention, fundamentally changing how telecommunications infrastructure is managed and maintained.
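The autonomy ladder referenced here can be made concrete with a small sketch. The level names below paraphrase the TM Forum L0-L5 autonomous-networks scale rather than quote its official definitions, and the oversight rule is purely illustrative:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Paraphrase of the TM Forum L0-L5 autonomous-network scale."""
    L0_MANUAL = 0       # all operations performed by human operators
    L1_ASSISTED = 1     # tooling assists otherwise manual execution
    L2_PARTIAL = 2      # closed loops automate specific, bounded tasks
    L3_CONDITIONAL = 3  # system senses and adapts in known scenarios
    L4_HIGH = 4         # autonomous across most scenarios in a domain
    L5_FULL = 5         # fully autonomous across all scenarios

def requires_human_in_loop(level: AutonomyLevel) -> bool:
    # In this sketch, a human stays in the operational loop below L4,
    # the milestone the discussion targets for 2028.
    return level < AutonomyLevel.L4_HIGH

assert requires_human_in_loop(AutonomyLevel.L2_PARTIAL)
assert not requires_human_in_loop(AutonomyLevel.L5_FULL)
```

Modelling the levels as an `IntEnum` keeps the ordering explicit, so the progression from today's semi-automated systems toward level 5 can be compared numerically.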


Dr. Vinesh Sukumar from Qualcomm focused on the democratisation of AI through edge computing, emphasising the importance of bringing AI inference capabilities directly to personal devices including phones, laptops, smart watches, and smart glasses. He highlighted the critical challenge of developing hybrid AI systems that intelligently balance processing between edge devices and cloud infrastructure. This hybridisation concept addresses the fundamental question of which AI functions should run locally on devices for privacy and responsiveness versus which should leverage cloud resources for complex processing and fleet management.


Sukumar identified key performance indicators that guide edge versus cloud decisions: data privacy, user privacy, responsiveness, and predictable end-to-end performance favour edge processing, whilst fleet management, AI/ML training, and complex scenario handling benefit from cloud infrastructure. However, he acknowledged that current routing decisions remain largely static, with predetermined workloads assigned to specific processing locations, whilst the ultimate goal involves dynamic, intelligent routing that can adapt based on conversation context and user needs.
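As a rough illustration of the static routing Sukumar describes, the KPIs can be encoded as a simple rule table. The field names and the 50 ms latency threshold are hypothetical choices for this sketch, not drawn from any Qualcomm implementation:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    privacy_sensitive: bool    # touches personal or on-device user data
    latency_budget_ms: int     # end-to-end responsiveness target
    needs_fleet_context: bool  # requires cross-device or fleet-wide data
    is_training: bool          # model training rather than inference

def route(w: Workload) -> str:
    """Static routing rule reflecting the KPIs named in the discussion:
    training and fleet management pull a workload to the cloud, while
    privacy and tight latency budgets pull it to the edge."""
    if w.is_training or w.needs_fleet_context:
        return "cloud"
    if w.privacy_sensitive or w.latency_budget_ms <= 50:
        return "edge"
    return "cloud"

# e.g. on-device wake-word inference stays local; retraining goes up
assert route(Workload("wake-word", True, 20, False, False)) == "edge"
assert route(Workload("retraining", False, 10_000, True, True)) == "cloud"
```

The static table above is exactly the limitation Sukumar notes: the eventual goal is for `route` to become a dynamic policy that adapts to conversation context and network conditions at run time.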


Ecosystem Collaboration and Platform Innovation

Pasi Toivanen from Nokia introduced a crucial perspective on ecosystem collaboration, arguing that successful AI implementation requires unprecedented cooperation between technology players, regulators, and government agencies. He challenged the traditional competitive approach in telecommunications, asserting that no single entity, regardless of expertise, can fully capture the 360-degree complexity of AI implementation independently. This ecosystem thinking emphasises transparent value creation and distribution amongst collaborative partners.


Toivanen stressed that security risks in AI-enabled networks will be fundamentally different from traditional threats, requiring end-to-end thinking and collaborative approaches to address vulnerabilities effectively. He advocated for pushing decision-making capabilities to the network level wherever possible, minimising reliance on distant data centres to reduce inefficiency and complexity whilst maintaining robust security postures.


Shantaram Jagannath from Tejas Networks presented a comprehensive framework for AI adoption based on cost optimisation and revenue generation strategies. He outlined approaches for both capital expenditure (CAPEX) optimisation through architectural choices and operational expenditure (OPEX) optimisation through enhanced operational efficiency. For revenue generation, he identified product enhancement opportunities that improve network efficiency alongside innovative AI-as-a-service models.


Jagannath introduced a revolutionary concept of transforming telecom networks into AI platforms similar to app store models, where developers can upload AI applications that become accessible to all network users. This platform approach particularly addresses India’s “bottom of the pyramid” population, enabling democratised access to AI services through telecommunications infrastructure. He projected a fundamental shift from current human-centric networks to future networks potentially handling significantly more AI agents, requiring comprehensive rethinking of business models, pricing frameworks, and regulatory approaches.


Trust-Centric Governance and Regulatory Framework

Chairman Lahoti emphasised that trust must remain the central pillar of AI adoption in telecommunications, given that automated algorithmic decisions can simultaneously affect millions of users. This scale amplification makes transparency, accountability, and consumer rights protection non-negotiable requirements rather than optional considerations. He referenced the MANOV vision announced by the Honorable Prime Minister of India, which emphasises a human-centric framework for ethical, accountable, and inclusive AI governance.


TRAI has proactively addressed these challenges through a risk-based regulatory framework for AI in telecommunications, recognising that different AI use cases carry varying levels of risk. Low-risk applications may be guided through self-regulation mechanisms, whilst high-risk use cases—especially those directly affecting consumers—require stronger obligations around transparency, explainability, and human oversight. This nuanced approach enables innovation whilst ensuring appropriate safeguards.
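A minimal sketch of how such a risk-based framework might be expressed in code, with hypothetical tier names and a deliberately simplified classification rule (the actual TRAI recommendations define risk far more granularly):

```python
# Hypothetical obligation tiers illustrating a risk-based framework:
# low-risk uses self-regulate; high-risk, consumer-facing uses carry
# transparency, explainability, and human-oversight obligations.
OBLIGATIONS = {
    "low": ["self-regulation"],
    "high": ["transparency", "explainability", "human oversight"],
}

def classify(consumer_facing: bool, decision_impact: str) -> str:
    # In this sketch, any use case that directly affects consumers,
    # or whose automated decisions have major impact, is high risk.
    if consumer_facing or decision_impact == "major":
        return "high"
    return "low"

assert classify(consumer_facing=True, decision_impact="minor") == "high"
assert OBLIGATIONS[classify(False, "minor")] == ["self-regulation"]
```

The point of the tiering is that obligations scale with risk, so internal optimisation tools face lighter requirements than systems whose algorithmic decisions reach millions of subscribers.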


In April 2024, TRAI further facilitated responsible innovation through recommendations on regulatory sandboxes, enabling live network testing of AI-enabled solutions within defined safeguards. This approach reflects TRAI’s regulatory philosophy of enabling innovation whilst ensuring public interest protection, providing controlled environments for testing AI applications relevant to 5G and future 6G networks.


The regulatory framework aligns with the Government of India’s broader AI governance approach, including the India AI mission and recently articulated AI governance guidelines. These principles prove particularly relevant for telecommunications, where AI systems interact continuously with citizens, enterprises, and public institutions.


Technical Architecture and Implementation Challenges

The discussion revealed significant technical challenges in implementing AI across India’s massive existing telecommunications infrastructure. A key debate emerged around optimal decision-making distribution within network architectures. Dr. Sukumar advocated for edge processing to handle privacy-sensitive operations and latency-critical applications, whilst Mr. Toivanen argued for concentrating most decisions at the network level to avoid inefficiency and complexity associated with distributed processing.


This architectural debate reflects broader challenges in hybrid AI implementation, where systems must intelligently route workloads between edge devices, network infrastructure, and cloud resources based on real-time context, user requirements, and network conditions. Current implementations largely rely on static routing decisions with predetermined workload assignments, but the industry aspires toward dynamic, intelligent routing capabilities.


The sustainability dimension adds another layer of complexity, as AI’s compute-intensive nature raises concerns about energy consumption and environmental impact. However, the discussion revealed that properly implemented AI can actually improve energy efficiency, with concrete examples of significant energy savings in network operations.


A critical question from the audience addressed the practical challenge of managing 118 crore (about 1.18 billion) mobile connections, highlighting the need for sophisticated network management capabilities that can create different network slices for various use cases while ensuring equitable resource distribution across urban and rural areas.


Future Vision and Economic Models

The session addressed practical implementation challenges specific to India’s telecommunications landscape, including the tension between AI-first architectures that require comprehensive infrastructure replacement versus approaches that leverage existing investments whilst adding AI functionality.


The discussion also explored future network usage patterns, with projections suggesting that AI agents could significantly outnumber human users in coming years. This transition from human-centric to AI agent-dominated networks raises questions about pricing models, regulatory oversight, and value distribution across the AI ecosystem.


The platform model concept introduces additional complexity, as telecom networks would need to balance their traditional role as connectivity providers with new responsibilities as AI service platforms. This transformation requires developing technical capabilities for dynamic application deployment and establishing governance mechanisms for platform operations.


International Cooperation and Standards

The discussion acknowledged that AI-driven telecom operations increasingly operate across borders, making interoperability, standards development, and ethical alignment global concerns. India’s experience deploying AI in telecommunications at population scale offers valuable lessons for international cooperation, whilst shared challenges require collaborative solutions.


The progression toward 6G networks necessitates international coordination on standards development, particularly for AI-native network architectures and autonomous operation capabilities. This coordination must address technical interoperability, security frameworks, and ethical guidelines that enable seamless international connectivity.


Conclusion and Path Forward

The session concluded with strong consensus that AI will undoubtedly shape the future of telecommunications, but the manner of design, governance, and deployment will determine whether this future proves trusted, inclusive, and resilient. The discussion successfully bridged technical capabilities with policy implications, business model innovation, and societal impact considerations.


TRAI’s commitment to working with all stakeholders—industry, policymakers, and international partners—ensures that AI in telecommunications serves both innovation and public good. The regulatory authority’s approach of enabling innovation whilst maintaining public interest protection provides a framework for responsible AI deployment at scale.


The conversation revealed that successful AI integration in telecommunications requires more than technical implementation; it demands ecosystem thinking, collaborative governance, and careful attention to equity and sustainability concerns. India’s unique position—with extensive 5G coverage, massive user base, and proactive regulatory framework—positions the country to lead global innovation in AI-native telecommunications whilst serving as a model for responsible AI deployment.


This first plenary session established the foundation for deeper discussions, with a second session planned to focus specifically on “building customer trust through AI-driven operations,” reflecting the continued emphasis on trust as the cornerstone of AI adoption in telecommunications infrastructure.


Session transcript: complete transcript of the session
Ms. Pallavi Mishra

Thank you. This event is being organized on the sidelines of the India AI Impact Summit 2026. Today, we are gathered to discuss the new elements of AI and telecommunication. This event is organized by the Telecom Regulatory Authority of India, TRAI, in collaboration with India AI under the Ministry of Electronics and IT. Interestingly, today, on 20th February, TRAI marks 29 years of its journey in shaping India’s telecommunication landscape. Representatives from telecom operators, technology OEMs, policymakers, government, academia and media are present here. A heartfelt welcome to you all. Imagine a telecom network that can heal itself, that can detect faults even before we know them and deliver seamless connectivity to billions without interruption.

This is not a science fiction. This is the power of AI in telecommunication. Today, AI is transforming industries. And as we look ahead, AI is all set to become even more transformative. From predictive network management to intelligent customer experiences, the possibilities are humongous. Now, it’s my proud privilege to invite Shri Anil Kumar Lahoti ji, Honorable Chairman TRAI, whose dynamic leadership continues to provide direction and strength to the telecom regulatory ecosystem of our country. Chairman sir needs no introduction. His vision has been instrumental in steering TRAI through a rapidly evolving digital landscape. I respectfully request Chairman sir to kindly deliver his inaugural address.

Shri Anil Kumar Lahoti

Distinguished leaders from the technology companies, from telecom service providers and industry associations, representatives from government, my colleague members from TRAI and other colleagues from TRAI, ladies and gentlemen. Good afternoon to all of you. It’s my privilege to welcome all of you to this session on Responsible AI in Telecom. This is a session on the sidelines of the India AI Impact Summit. During the last few days, we have been listening to world leaders from governments, technology companies, academia and civil society. AI is now and here. In this context, the very composition of this gathering reflects a shared recognition that artificial intelligence is no longer an emerging add-on to telecommunication. It is a foundational capability shaping how networks are designed, operated and experienced by users.

Artificial intelligence and telecommunications complement each other to form the backbone of the intelligence era. Telecom networks are emerging as the primary carriers of AI, while AI itself is becoming the intelligence layer of telecom. In the upcoming 6G technology, AI will no longer be an application layer. It will be intrinsic. The telecom networks will be AI-native. In this sense, telecom networks are no longer mere data carriers; they are a central pillar of India’s AI infrastructure. Our nationwide fiber backbones and mobile broadband networks constitute one of the most widely distributed digital infrastructures in the world, operating within mature operational and regulatory frameworks. India’s scale gives special significance to this convergence. With over 1.3 billion telecom subscribers and over 1 billion data users, India operates telecom networks at a scale where AI-driven automation is no longer optional.

It is indispensable. AI is already being deployed to optimize network performance, predict faults, improve energy efficiency, enhance customer experience, and combat fraud and spam communications. These deployments demonstrate how AI can improve service quality, resilience, and consumer safety when applied responsibly at the network level. India is already witnessing clear gains from the responsible use of AI in the telecom sector. Operators are reporting significant energy savings with the use of AI. Due to the effectiveness of AI and blockchain-based filtering, operators are now flagging or blocking nearly 400 million suspected spam calls or messages each day. Enhanced enforcement and improved oversight of service providers has already led to the disconnection of about 2.1 million spam numbers. The authority is also advancing the rollout of a digital consent acquisition framework, following successful pilot runs with the banks, to ensure consumers have digital control over consent for commercial communications.

At the same time, the scale at which AI systems operate is also increasing, and the scale of AI in telecom amplifies its impact: automated decisions taken by algorithms can affect millions of users simultaneously. This makes trust the central pillar of AI adoption in telecommunication. Efficiency gains cannot come at the cost of transparency, accountability or consumer rights. As telecom is an essential service, public confidence must remain at the core of AI-enabled transformation. The Government of India has been proactive in addressing this balance. The India AI mission and the recently articulated AI governance guidelines emphasize an approach that encourages innovation while embedding safeguards by design.

These principles are particularly relevant for telecom, where AI systems interact continuously with citizens, enterprises and public institutions. TRAI has been aligned with this approach. In July 2023, TRAI issued recommendations on leveraging artificial intelligence and big data in the telecommunications sector, proposing a risk-based regulatory framework for AI in telecom. This approach recognizes that not all AI use cases carry the same level of risk. While low-risk applications may be guided through self-regulation, high-risk use cases, especially those directly affecting consumers, require stronger obligations around transparency, explainability, and human oversight. In April 2024, TRAI further facilitated this approach through its recommendations on the regulatory sandbox, enabling live network testing of AI-enabled solutions, including those relevant for 5G and future 6G networks, within defined safeguards.

This reflects our regulatory philosophy of enabling innovation while ensuring that public interest remains protected. The MANOV vision announced yesterday by the Honorable Prime Minister of India emphasizes a human-centric framework for ethical, accountable, inclusive AI governance. The principles of the MANOV vision are equally fundamental to AI governance in telecommunication. Coming back to the agenda of today’s program, the two plenary sessions we have planned capture this responsibility very well. The first session, featuring technology developers, will focus on preparing telecom networks for the AI era and examine how networks must evolve to become more intelligent, autonomous and resilient, while remaining secure and sustainable. The second session, with representatives from telecom service providers and the GSMA, will address building customer trust through AI-driven operations, highlighting governance, ethics, accountability, and customer protection in an environment where AI-based decisions increasingly shape everyday connectivity.

As AI-driven telecom operations scale across borders, issues of interoperability, standards, and ethical alignment become global concerns. India’s experience of deploying AI in telecom at population scale offers valuable lessons, while international cooperation remains essential to address shared challenges. Let me conclude with this thought. AI will undoubtedly shape the future of telecommunications. But it is the way we design, govern and deploy AI that will determine whether this future is trusted, inclusive and resilient. TRAI remains committed to working with all stakeholders, industry, policymakers and international partners, to ensure that AI in telecom serves both innovation and public good. I wish this session fruitful deliberations and look forward to the insights that will emerge from today’s discussions. Thank you.

Ms. Pallavi Mishra

Thank you very much, Chairman, sir, for your inspiring address. You have illuminated how regulatory frameworks and policies are evolving for AI-driven telecom. Sir, your words make us believe that this transformation is moving forward in a positive way. We are delighted to hear your perspective, sir. Heartfelt gratitude to all the esteemed speakers and guests. The inaugural session has set a vibrant context for our upcoming discussions. Now our first plenary session will begin. Our first session is on preparing telecom networks for the AI era. In this session, our experts will discuss AI adoption in telecom, transparency, security, safety, sustainable AI networks, and embedding responsibility by design. To moderate this insightful discussion, we are honored to invite Shri Ritu Ranjan Mittar, Member, TRAI, an eminent telecom policy expert with over three decades of experience in telecom networks, global standards, and spectrum policy.

We are honored to have a distinguished panel of industry leaders. I welcome on dais our first panelist, Mr. Magnus Ewerbring, Chief Technology Officer for Asia Pacific at Ericsson, a global telecom innovator who has played a key role in developing the region's long-term technology vision from 5G deployment to 6G readiness. Joining us next is Dr. Vinesh Sukumar, Vice President, Product Management at Qualcomm, a seasoned product leader with over 20 years of experience in large-scale AI, deep learning, and mobile technologies across the global telecom ecosystem. We also welcome Mr. Pasi Toivanen of Nokia, who leads strategic engagement with governments and industry on AI and connectivity initiatives, driving large-scale ecosystem collaboration in cloud AI and AI-RAN. Our next distinguished speaker is Mr.

Shantigram Jagannath from Tejas Networks, a technology strategist leading wireless products, network management systems, and AI-driven innovations. I request all the panelists to join for a quick photograph session on the demand of the organizers. You all may stand for a moment. Thank you, sirs. We look forward to a thoughtful exchange from our panelists on how AI-driven innovations are redefining telecom capabilities for the future. Now I hand over the stage to our moderator, Shri Ritu Ranjan Mittar, to start the session.

Shri Ritu Ranjan Mittar

Thank you, Madam. Shri Lahoti, Chairperson, TRAI; my colleague Member Dr. Tangirala; doyens of the industry; OEMs and your staff accompanying you; my colleagues from TRAI; industry associations; young officers; representatives of start-ups. It's a very important session on how the telecom networks will actually evolve with AI. We saw a lot of use cases in the last 2-3 days related to farming, education, healthcare, but today the focus is telecom. This session is on the network; the next session, as you've been informed, is on the subscriber. As a telecom network, we are introduced to the term access network, so one thing we would like to understand is how your access network is evolving with AI. We all know that in 6G, AI and communication is one of the six important use cases. Similarly, going to the core, what kind of changes do you envisage when you implement AI, especially with respect to the core? The Chairperson spoke about the benefits that are already accruing for network management as AI gets into network management.

Another thing I would also request you to dwell on is that ultimately AI is going to come onto the handsets. So once AI comes onto the handsets, what kind of challenge will it throw to the network? Another important aspect, also raised by the Chairperson, is security. Are we going to be challenged by AI being used for attacking the networks, and what kind of steps do we intend to take? One more thing with AI is that it is compute intensive, so sustainability, as also listed, is going to be important. It will be important to know what kind of steps the OEMs are envisaging to take care of the sustainability part of it.

Your countries are all signatories to the sustainable development goals of the UN, so this aspect also is very important. Now, without taking too much time, since everybody wants to listen to the experts here, I will first invite Mr. Magnus Ewerbring from Ericsson to kindly share his views.

Magnus Ewerbring

Thank you very much. It's a great pleasure to be here. From the bottom of my heart, I'm very impressed by this event and the messages we've heard, and I think we just heard at the onset this very central message: to leverage AI fully for ordinary people, consumers, industries, enterprise and government functions. It needs compute resources, which we often attribute to the data centers, and indeed they will be there. But we also need to connect them. And here I think India comes out in pole position, having well over 90% population coverage for 5G today; I've heard numbers even up to 99%. And not only that, but truly well-performing networks.

That is a fundamental platform to drive innovation on, together with AI. And looking at the next five years to come, that indeed is what we will see. The nations that in a few years' time, by the end of this decade, are at the leading edge with the best of what 5G can do, connecting the data centers with the AI applications in devices, will be at the cutting edge and will have an advantage. And they are the ones with the smallest step into 6G. Again, I think India is in a supreme pole position to be there. The networks already today use AI. They use it to a large degree, but still I argue it's only the beginning.

We use it as the systems are being configured, becoming more and more autonomous. The goal in the industry, still a good challenge but reachable, is to reach what we call level 4 in TM Forum terms. By 2028, many mobile operators aspire to be there, and that's beautiful, that's very good, but it is a big undertaking, with AI being part of reaching that level. Taking the next step, I argue, is what we really do with 6G: to reach level 5 and be fully autonomous, to cut the ties a bit and let it run. That is what we have in mind now as we gradually set the standard for 6G. 6G shall be AI native, and that will then be a baseline for bringing on the knowledge we've gained in 5G and taking the next step with 6G. Lastly, then, networks for AI: AI on the application side. Here I argue it's important to go society-wide.

India is building its digital stack in an impressive way. Leverage AI in that. Industries come in, develop applications on top of it. We’ll drive efficiency locally and also will be export possibilities. So leverage the 5G systems with AI and cloud and then drive the way into 6G. Thank you.

Dr. Vinesh Sukumar

Thank you. We at Qualcomm have been trying to really democratize AI and to see if we can make AI resident on devices. These devices could be personal devices like phones, laptops, smartwatches, smart glasses, anything you can think of. But doing that kind of AI inference on the edge is not easy, especially when you want to go towards more personalized inference. I always joke with my colleagues that AI historically lacks common sense. Translating that into something meaningful to the user needs a lot of investment. In India, we're seeing this significantly change. We're seeing a lot of players putting a lot of focus on how we get AI attached to the user and make a more important connection that drives a lot of these experiences.

At the same time, it's also very critical to look at coexistence between what runs on the cloud and what runs on the edge. It's what we call the concept of hybridization: a system built around both cloud and edge, with inference workloads distributed between the two. Hybrid AI, working with our network operators, is also not an easy concept.

It's always been a challenge to understand which of these experiences should be transitioned towards the cloud, how you make those decisions, and what runs on the edge. There's a lot of research activity happening in this space, and India is definitely leading on this front. I totally expect that in the next couple of months we will see a strong transition where there's a fundamental element of hybrid, where edge and cloud coexist. And last but not

Shri Ritu Ranjan Mittar

Thank you, Mr. Sukumar. I would like to invite Mr. Pasi from Nokia now.

Mr Pasi Toivanen

Namaste. My sincere thanks for the opportunity to be here. It's exciting, for the sake of this AI Summit, but it's also very personal for me. I have been coming to the beautiful country of India since 1999, and it has been fascinating to see this country evolving over these years, in different locations. We were talking earlier today about how many places I have experienced this evolution and development in. It's fascinating. So thanks for the opportunity to be here. It's very important. I don't know what to add after such a wonderful keynote by the Chairman and the other opening talks. I think it is obvious that AI will also impact telecommunication networks: the network evolution, the business models, the innovation, everything around it.

So instead of going deeper into the technology discussion, because that was very well covered by the previous panelists, industry leaders, I would focus these couple of minutes on how. Not what, but how. How to capture the best of the AI era. And for me, it comes down to who is able to build the most value-rich, most welcoming, most compelling ecosystems. Who is able to gather like-minded technology players, regulators, government agencies to think together about the opportunity, the risks, the challenges, the whole 360 of AI. I don't believe that any player, even if they had the smartest people on the planet, is able to fully capture that 360 by playing alone. Ecosystem it is. How you are able to proactively define the overall value of this AI evolution.

And then transparently and proactively agree how that value is distributed and maximized within that ecosystem. It might sound ideological, it might sound a little bit naive, but I'm a firm believer that it is key for success. It's the key to capturing and delivering all the opportunities within this AI era. It is the only way to address the security risks, which are now going to be different than at any time earlier. We are going to have a different access network. We are going to have a richer number of applications, applications which are transferring more data, and that increased amount of data contributes to the security risks. We have to address that topic together, end to end.

And let's do it. India can show the direction forward for the whole world. There is a tradition of great collaboration, great innovation, so let's do it. Thank you.

Shri Ritu Ranjan Mittar

Thank you so much, Mr. Pasi. I would now like to invite Mr. Shantigram from Tejas.

Mr. Shantigram Jagannath

Yes. Hi, good afternoon, everyone. Am I audible? Not audible? Now it's okay. So my colleagues and my boss warned me that if you're coming in at number four, you may not have much to say, so think of something different. So there are a few things that I would like to say. Being from here, born and brought up in India, making stuff for India, I'll try to anchor my few comments in terms of what we want to do here. I don't know if you happen to have read the book by C. K. Prahalad, where he talked about the economy at the bottom of the pyramid.

So we carry with us a responsibility of solving problems for the bottom of the pyramid, primarily. And Indian telecom especially has that additional responsibility of making sure that access is provided to, and at a very low cost, it is provided to pretty much everybody in the country. Now, in this context, if you want to accelerate the adoption of AI, I leave you with a framework of thought. This is what I use to basically figure out what it takes. So you can either be looking at cost, or you can be looking at revenue. Or both. So when telecom operators have to figure out what are they going to invest in, then these are the sort of the two guidelines or markers that they can use.

When it comes to cost, it is optimization of either CAPEX or OPEX, in a simplistic sense. OPEX would be operational efficiency. There's a lot of literature available, and my friends from the global OEMs are far ahead in implementing some of these. We in India are chasing it as well. In terms of the hardware, it leaves you with, again, two choices for architecture. Do you find a way to do a completely AI-first, AI-native architecture? Or should we find a path that has a bolt-on capability? Because you have equipment that's already in the field, and there are a lot of investments that we are making very fresh.

Unlike some of the more mature networks in the West, where the capital cycle has already gone through 4G and 5G and paid off over 5-10 years, in India we have made fresh investments even as recently as last year. If that kind of equipment has to be leveraged for the next 10 years, how do you deliver AI on it? That is one challenge we'll have to solve. So there will be a choice: do you do AI-first, or do you do a bolt-on? If you are looking at the revenue side of the equation, again in a simplistic sense, there are two big buckets. There is product enhancement, where the telecom network itself is enhanced in terms of efficiency. There's a lot of work happening in 3GPP and in the 6G alliances, and a lot of thought is going into how you bring efficiency into the product, essentially optimizing the cost per bit, and doing it in a way that is a lot smarter than what we did before.

And on the other side, you have the possibility of generating revenue by providing AI through the telecom network, which Pasi also referred to. So I see that as a possibility of a two-way business model, an opportunity where on one side you have all the users, the communities, the people in our villages, the farmers, etc., and on the other side you have startups, companies that are building models, companies that are building specific agentic applications. And in between you have the telecom player. There's a possibility if we do the frameworks right. And what do I mean by frameworks? The very top thing in terms of framework is trust.

There has to be trust, there has to be some amount of regulation, and there has to be some amount of safety that comes with regulation. Then allow people to dynamically upload models, like an app store model. The telecom network essentially becomes a platform where simple, easy models can be uploaded and made accessible to all the users of the telecom network. It doesn't have to be only the bottom of the pyramid, but of course the bottom and all the layers above. So I'll leave you with this thought.
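The cost side of the framework described here, optimizing CAPEX and OPEX against the traffic carried, reduces to a cost-per-bit comparison. The following is a toy sketch only: the figures and the `cost_per_bit` helper are hypothetical illustrations, not numbers from the panel.

```python
# Illustrative cost-per-bit comparison for the CAPEX/OPEX framework.
# All figures below are hypothetical, chosen only to show the arithmetic.

def cost_per_bit(capex_per_year: float, opex_per_year: float,
                 bits_per_year: float) -> float:
    """Total annualized network cost divided by traffic carried."""
    return (capex_per_year + opex_per_year) / bits_per_year

# Baseline network (hypothetical): $100M CAPEX/yr, $60M OPEX/yr, 4e18 bits/yr.
baseline = cost_per_bit(100e6, 60e6, 4e18)

# AI-assisted operations (hypothetical): suppose OPEX drops 20% and the
# same equipment carries 10% more traffic, in the spirit of the capacity
# gains mentioned elsewhere in the session.
ai_assisted = cost_per_bit(100e6, 48e6, 4.4e18)

print(f"baseline cost/bit:    {baseline:.3e}")
print(f"AI-assisted cost/bit: {ai_assisted:.3e}")
print(f"reduction:            {1 - ai_assisted / baseline:.1%}")
```

Under these assumed numbers the cost per bit falls by roughly 16%, which is the kind of marker an operator could use when deciding between an AI-native rebuild and a bolt-on upgrade.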

Shri Ritu Ranjan Mittar

So thank you so much, Mr. Shantigram, for sharing your thoughts. Now, for the next few minutes, I will first have a round of questions with the panel here, and then we will throw the session open for questions from the floor. So, Mr. Magnus, the first question is this: we are talking of optimization with AI, but we always used to say that 4G and 5G networks are self-organizing. The concept of SON, self-organizing (self-optimizing) networks, was already there. So what does AI fundamentally change now?

Magnus Ewerbring

Okay, thank you. Well, it's always a journey. For every move we make, we learn, we get more insights, and then we take that and move further on. So in that sense, although we've made more and more parts of the systems autonomous in the past, it doesn't mean we can't do more in the future. And we should do more, of course. Now, with AI, we get a very powerful tool to analyze a lot of data and to apply that knowledge onto a set of algorithms in the system. And we've had some fantastic observations there. One was in a part that has been optimized for decades, how we do the link adaptation, how we control the communication between the system and the device: we've managed to increase the capacity by 10%.

So imagine you are loading a spectrum of 100 megahertz; then you have the equivalent of 110 megahertz by applying this optimized algorithm. And that's the cutting edge of what we can do today. I'm sure we can make improvements on that tomorrow also. How much? I don't dare to state. In energy efficiency, as has been discussed here also, we, in another part of the system, applied AI analysis and made it part of the ongoing processing, and energy efficiency was improved by 33% for that part. That's an enormous saving when applied to a pan-India network, of course. So, again, it is about, wherever you are, however far you've come, continuing to do research to understand the future potential, and then step by step applying the knowledge you have derived.

And you will continue to take steps and climb the ladder. Thank you.

Shri Ritu Ranjan Mittar

So, the next question to you, Mr. Sukumar. In the telecom hardware and software space, which decisions should be pushed to the edge and which decisions should be centralized?

Dr. Vinesh Sukumar

It's a great question, by the way. I think it's really going to come down to elements of key performance indicators. If you're looking at areas where you want to focus more on data privacy, user privacy, better responsiveness, better data management, and predictable end-to-end performance, to a large extent that's going to happen on the edge. This could cover experiences in data-plane loops, L1, L2, user and kernel space operations, and anything to do with PII or user privacy; all of that would be edge resident. And as you go more towards cloud, I would say the emphasis is going to be a lot more around fleet management and anything to do with AI/ML training as such.

There's a concept which historically in the ML land was called MLOps, covering everything from training data all the way to inferencing and monitoring. We have a new model which looks at areas where things are breaking because of drift and how we go fix that; that's where I think cloud definitely helps. Now, at the same time, it's not a very binary equation where one thing is edge and one is cloud. There are also concepts of hybrid, where it's going to be coexistence. As I was mentioning in my talk, you have to find ways for the edge to complement the cloud. It could be elements of personalization, where you want to drive the edge,

and when you go towards more complex scenarios, you position towards the cloud. Hybrid is at a very early stage. To a large extent, these days the routers are very static in nature, meaning the workloads and experiences are predefined: X, Y, Z runs on the edge and A, B, C runs on the cloud. But the most difficult challenge has been how these routers can be intelligent enough, when you happen to have a multi-turn conversation, to position towards the cloud at some point in time. I think that is a huge research topic, and I'm hoping in the next couple of months we'll see some interesting results.

Thank you.
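The placement criteria described above (privacy and tight-latency loops on the edge; heavy, multi-turn workloads on the cloud) can be sketched as a static rule-based router of the kind the speaker calls "predefined". This is a minimal illustration only: the `Request` fields and thresholds are assumptions for the sketch, not anything from an actual product.

```python
from dataclasses import dataclass

@dataclass
class Request:
    contains_pii: bool        # privacy-sensitive data should stay on device
    latency_budget_ms: float  # tight real-time loops favour the edge
    complexity: float         # 0..1 proxy for model size / multi-turn depth

def route(req: Request) -> str:
    """Static edge/cloud placement following the KPIs discussed:
    privacy, responsiveness, predictability -> edge;
    heavy or multi-turn workloads -> cloud."""
    if req.contains_pii:
        return "edge"        # PII never leaves the device
    if req.latency_budget_ms < 50:
        return "edge"        # real-time, data-plane-style loop
    if req.complexity > 0.7:
        return "cloud"       # complex scenario, position to the cloud
    return "edge"            # default to local inference

print(route(Request(contains_pii=True, latency_budget_ms=500, complexity=0.9)))
print(route(Request(contains_pii=False, latency_budget_ms=500, complexity=0.9)))
```

The open research problem the speaker points to is exactly replacing these fixed rules with a learned router that can reassess placement mid-conversation.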

Shri Ritu Ranjan Mittar

So the next question is for you, Mr. Pasi. I'm sure a lot of development is already taking place with the telecom OEMs. Now, what kind of decisions do you expect will be taken off-net, and what decisions during the operation of the network, while using AI, do you think you would be taking?

Mr Pasi Toivanen

Wow. And we have only these four minutes? Okay, okay. But on a serious note, I think the jury is still out on this one. Going back to my earlier comment, it depends on how we holistically address the topic, how we go through the overall value, that ecosystem, and the related functionality. I believe more will be done by the network itself. When we design it correctly, it's able to do many of the security vulnerability assessments by itself. Is it alerting upstream? Is it going to the edge or further? Let's see. But I think there needs to be a very intense dialogue between the network and the edge. The more decisions travel further to the regional data centers, the more we contribute to inefficiency and hence also to complexity.

So my planning assumption is that I would push as much of the optimization and automation as possible to the network itself, then the limited cases to the edge, and less and less to the actual regional data center, to put it short.

Shri Ritu Ranjan Mittar

Thank you for that. Shantigram, let me come to the ethical part of it. Let's say a base station serves an urban area and part of it also serves a rural area. How can we make sure that, with AI in the background, the customer in the rural area is not deprived of bandwidth vis-à-vis the urban area? I'm sure you will be looking at these things, that the bandwidth is not constrained to an area or to a set of subscribers, but what are your thoughts? How can we check these things in a network?

Mr. Shantigram Jagannath

Okay. So the easy answer is to buy more equipment. But that's a great question. This question has been live for almost decades. We went through a phase where we had net neutrality and those debates happening, so it's not different from those types of debates. While we look at access to AI, there is access to the central AI, which has to go through backhaul capacity and so on, and obviously there one has to create different types of network slices for different types of use cases. With today's technology, at least the way that we administer networks, it is quite possible to do that.

And it is possible to do that with single clicks. And with us bringing in operational AI, it can do it even more efficiently, without you having to think too much. You can just say that you need to create this kind of bandwidth for this type of application, and the assisted network management can actually go and do that for you. Now, couple this with having a lot of edge access for AI. I'll share an example: in the US they recently launched an application where the telecom network can actually sense your voice metrics and identify you through that.

So imagine that kind of application here in India, where your identity can actually be verified by the telecom operator, not just by your number or by digital information but by the analog information that you are actually communicating, and these types of applications can be launched on the edge of the network. So, short answer: step one, try to have a lot of edge AI, and step two, use much more sophisticated network management capability to clearly separate out the different types of traffic.
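One way to read the slicing answer above is as a guaranteed-minimum allocator: each slice first receives a reserved floor, so rural users are not starved when urban demand spikes, and leftover capacity is shared in proportion to unmet demand. The following is a toy sketch under assumed figures; the slice names, numbers, and the `allocate` helper are hypothetical, not from the panel.

```python
def allocate(capacity_mbps: float, demands: dict, guarantees: dict) -> dict:
    """Guaranteed-minimum bandwidth sharing across network slices.
    Assumes the sum of guarantees does not exceed capacity."""
    # Step 1: each slice gets its guaranteed floor (capped by its demand).
    alloc = {s: min(demands[s], guarantees.get(s, 0.0)) for s in demands}
    remaining = capacity_mbps - sum(alloc.values())
    # Step 2: share what is left in proportion to unmet demand.
    unmet = {s: demands[s] - alloc[s] for s in demands}
    total_unmet = sum(unmet.values())
    if total_unmet > 0 and remaining > 0:
        for s in demands:
            share = remaining * unmet[s] / total_unmet
            alloc[s] += min(share, unmet[s])  # never exceed demand
    return alloc

# Hypothetical congested cell: urban demand far exceeds rural demand.
demands = {"urban": 800.0, "rural": 150.0}
guarantees = {"rural": 100.0}  # rural slice keeps a guaranteed floor
result = allocate(600.0, demands, guarantees)
print(result)
```

With these numbers the rural slice ends up above its 100 Mbps floor even under heavy urban load, whereas a purely proportional split would push it below it; that floor is exactly the policy knob a regulator or operator would set.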

Shri Ritu Ranjan Mittar

Well, thank you so much. We note that the time is up, but still we can have one or two quick questions. Yes, Mr.

Audience

Good afternoon, respected panelists. It was a wonderful discussion. I will try to keep myself short. As I was hearing all that: we have around 118 crore mobile connections in India, built over a huge network of fiber, wireless, leased circuits and everything. Once we try to introduce AI into that, as we know it's not AI native, we have to build it in the form of external apps and so on, and any single minute of disruption causes huge resentment and loss of time and resources. So how do we actually progress from these 118 crore connections if we want to feed them through AI? What is the vision in front of us that we can really carry forward from here? I will look to any of the panelists. Thank you.

Mr Pasi Toivanen

Perhaps I can start. Thanks, it's a wonderful question, and sorry if I sound like a broken record, but without thinking about the network evolution end-to-end, you are not able to address it. So it comes back to this ecosystem of players: that you are able to model what the change introduces to the network and optimize the network end-to-end. I think it is the only way. Otherwise, you are going to put patch fixes here and there based on certain application behaviors, and you are not able to evolve the whole network.

Shri Ritu Ranjan Mittar

Would you also like to supplement?

Mr. Shantigram Jagannath

So I think essentially what you are saying is that it is a journey, and how do we chart out that journey? There are two or three thoughts on this. One is that today our telecom networks are mostly catering to human users; of course, there are enterprises, etc. Three, four, five years down the road, I expect AI users to start to dominate. So the business model and the regulation have to support that kind of evolution. I think that is step one. We need to think through how we handle this: today we have 118 crore mobile phones; five years from now we might actually have 500 crore AI agents which are doing various things, but they are still communicating either with each other or with central data repositories and so on.

And we need to figure out how you charge for it. What is the economics of this? How is it all going to work? Who is going to pay for all of that activity? So there is a lot of policy thought process that has to go in. On the physical side, we are building denser and denser fiber optics which are carrying 8 terabits, 20 terabits and so on, in anticipation of something like this. So I don't know if that answered your question, but thank you so much.

Ms. Pallavi Mishra

Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Shri Anil Kumar Lahoti
9 arguments · 93 words per minute · 1023 words · 654 seconds
Argument 1
AI is becoming foundational to telecom networks, not just an add-on, with 6G networks being AI-native
EXPLANATION
The Chairman argues that AI has evolved from being an application layer to becoming intrinsic to telecom networks. In upcoming 6G technology, AI will be built into the core architecture rather than being added on top, making telecom networks AI-native from the ground up.
EVIDENCE
In the upcoming 6G technology, AI will no longer be an application layer. It will be intrinsic. The telecom networks will be AI native.
MAJOR DISCUSSION POINT
AI Integration in Telecommunications Infrastructure
AGREED WITH
Magnus Ewerbring, Ms. Pallavi Mishra
Argument 2
TRAI has implemented risk-based regulatory framework for AI in telecom, with stronger obligations for high-risk applications affecting consumers
EXPLANATION
TRAI has developed a regulatory approach that categorizes AI applications by risk level, applying different regulatory requirements accordingly. Low-risk applications can be self-regulated, while high-risk applications that directly affect consumers require stronger transparency, explainability, and human oversight obligations.
EVIDENCE
In July 2023, TRAI issued recommendations on leveraging artificial intelligence and big data in the telecommunications sector, proposing a risk-based regulatory framework for AI in telecom. While low-risk applications may be guided through self-regulation, high-risk use cases, especially those directly affecting consumers, require stronger obligations around transparency, explainability, and human oversight.
MAJOR DISCUSSION POINT
Regulatory Framework and Trust
Argument 3
Trust must remain central to AI adoption, with transparency, accountability, and consumer rights protection as core principles
EXPLANATION
The Chairman emphasizes that as AI systems operate at massive scale affecting millions of users simultaneously, maintaining public trust becomes crucial. Efficiency gains from AI cannot come at the expense of transparency, accountability, or consumer rights, especially since telecom is an essential service.
EVIDENCE
automated decisions taken by algorithms can affect millions of users simultaneously this makes trust the central pillar of AI adoption in telecommunication efficiency gains cannot come at the cost of transparency, accountability or consumer rights as telecom is an essential service public confidence must remain at the core of AI enabled transformation
MAJOR DISCUSSION POINT
Regulatory Framework and Trust
AGREED WITH
Shri Ritu Ranjan Mittar
Argument 4
Regulatory sandbox approach enables live network testing of AI solutions within defined safeguards for innovation while protecting public interest
EXPLANATION
TRAI has implemented a regulatory sandbox framework that allows for real-world testing of AI-enabled solutions on live networks. This approach enables innovation in 5G and future 6G networks while maintaining appropriate safeguards to protect public interest.
EVIDENCE
In April 2024, TRAI further facilitated this approach through its recommendations on the regulatory sandbox, enabling live network testing of AI-enabled solutions, including those relevant for 5G and future 6G networks within defined safeguards.
MAJOR DISCUSSION POINT
Regulatory Framework and Trust
Argument 5
AI systems operating at scale can affect millions of users simultaneously, making security and ethical deployment critical
EXPLANATION
The Chairman highlights that the massive scale of telecom networks means that AI-driven automated decisions can impact millions of users at once. This amplifies both the benefits and risks of AI deployment, making security and ethical considerations paramount.
EVIDENCE
automated decisions taken by algorithms can affect millions of users simultaneously this makes trust the central pillar of AI adoption in telecommunication
MAJOR DISCUSSION POINT
Security and Ethical Considerations
Argument 6
AI has already demonstrated benefits in combating fraud, with operators blocking 400 million suspected spam calls/messages daily
EXPLANATION
AI and blockchain-based filtering systems have proven highly effective in identifying and blocking fraudulent communications. This demonstrates the practical security benefits that AI can provide when deployed responsibly at the network level.
EVIDENCE
Due to the effectiveness of AI and blockchain-based filtering, operators are now flagging or blocking nearly 400 million suspected spam calls or messages each day.
MAJOR DISCUSSION POINT
Security and Ethical Considerations
Argument 7
AI is already deployed for network performance optimization, fault prediction, energy efficiency, customer experience enhancement, and fraud combat
EXPLANATION
The Chairman outlines the current practical applications of AI in telecom networks, showing that AI deployment is already underway across multiple operational areas. These applications demonstrate how AI can improve service quality, network resilience, and consumer safety when applied responsibly.
EVIDENCE
AI is already being deployed to optimize network performance, predict faults, improve energy efficiency, enhance customer experience, and combat fraud and spam communications.
MAJOR DISCUSSION POINT
Current AI Applications and Benefits
Argument 8
Operators are achieving significant energy savings and have disconnected 2.1 million spam numbers through AI and blockchain-based filtering
EXPLANATION
The Chairman provides concrete evidence of AI’s current benefits in the telecom sector, showing measurable results in both operational efficiency and security. These achievements demonstrate the practical value of responsible AI deployment in telecommunications.
EVIDENCE
Operators are reporting significant energy savings with the use of AI. Enhanced enforcement and improved oversight of service providers has already led to the disconnection of about 2.1 million spam numbers.
MAJOR DISCUSSION POINT
Current AI Applications and Benefits
AGREED WITH
Magnus Ewerbring
Argument 9
Digital consent acquisition framework is being rolled out following successful pilot runs with banks
EXPLANATION
TRAI is implementing a digital framework that gives consumers control over consent for commercial communications. This initiative, tested successfully with banks, represents a move toward giving users more digital control over their communication preferences.
EVIDENCE
The authority is also advancing the rollout of a digital consent acquisition framework following successful pilot runs with the banks to ensure consumers have digital control over consent for commercial communications.
MAJOR DISCUSSION POINT
Current AI Applications and Benefits
Magnus Ewerbring
3 arguments · 125 words per minute · 764 words · 365 seconds
Argument 1
India’s 5G coverage of over 90% population provides a strong platform for AI innovation and positions India advantageously for 6G transition
EXPLANATION
Magnus argues that India’s extensive 5G network coverage, reaching over 90% of the population with well-performing networks, creates an ideal foundation for AI-driven innovation. This infrastructure advantage positions India to be at the cutting edge of AI applications and provides a smooth transition path to 6G technology.
EVIDENCE
India comes out being very much in the pole position, having well over 90% population coverage; I've heard numbers even up to 99% population coverage for 5G today. And not only that, but truly well-performing networks.
MAJOR DISCUSSION POINT
AI Integration in Telecommunications Infrastructure
Argument 2
AI enables significant improvements including 10% capacity optimization in link adaptation and 33% energy efficiency gains in network operations
EXPLANATION
Magnus provides specific examples of AI’s impact on network performance, showing measurable improvements in both capacity and energy efficiency. These concrete results demonstrate the practical benefits of applying AI algorithms to optimize network operations that have been refined over decades.
EVIDENCE
in a part that's been optimized for decades, how we do the link adaptation, how we control the communication between the system and the device, we've managed to optimize the capacity by 10%. In energy efficiency, as has been discussed, we applied AI analysis in another part of the system and made that part of the ongoing processing, and energy efficiency was optimized by 33%.
MAJOR DISCUSSION POINT
Network Optimization and Operational Efficiency
AGREED WITH
Shri Anil Kumar Lahoti
DISAGREED WITH
Mr. Shantigram Jagannath
Argument 3
Networks are progressing toward level 4 autonomy by 2028, with 6G targeting fully autonomous level 5 operations
EXPLANATION
Magnus outlines the evolution path for network autonomy, explaining that the industry is working toward TM Forum level 4 autonomy by 2028, with 6G networks aiming for fully autonomous level 5 operations. This represents a progression from current AI-assisted operations to fully autonomous network management.
EVIDENCE
The goal in the industry, still a good challenge but reachable, is to reach what we call level 4 in TM Forum; by 2028 many mobile operators aspire to be there. Taking the next step, I argue, is what we really do with 6G: that's to reach level 5 and be fully autonomous
MAJOR DISCUSSION POINT
Network Optimization and Operational Efficiency
AGREED WITH
Shri Anil Kumar Lahoti, Ms. Pallavi Mishra
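The TM Forum autonomy ladder Magnus references runs from level 0 (fully manual) to level 5 (fully autonomous). The one-line descriptions in the sketch below are a paraphrase of the TM Forum autonomous networks framework, not the official wording:

```python
# One-line paraphrase of the TM Forum autonomous-network levels (L0-L5)
# referenced above; descriptions summarize the framework, not official text.
AUTONOMY_LEVELS = {
    0: "Manual operation and maintenance",
    1: "Assisted operation (tools help execute specific repetitive subtasks)",
    2: "Partial autonomy (closed-loop automation for certain units under set conditions)",
    3: "Conditional autonomy (the system senses its environment and adapts in real time)",
    4: "High autonomy (predictive, self-optimizing across most business scenarios)",
    5: "Full autonomy (closed-loop across all services, domains, and scenarios)",
}

# The timeline discussed above: many operators target level 4 by 2028,
# with 6G aiming for level 5.
print(AUTONOMY_LEVELS[4])
```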
Dr. Vinesh Sukumar
2 arguments · 202 words per minute · 882 words · 261 seconds
Argument 1
AI democratization requires moving inference to edge devices like phones and laptops, with hybrid AI systems combining edge and cloud capabilities
EXPLANATION
Dr. Sukumar argues that true AI democratization depends on bringing AI inference capabilities directly to personal devices rather than relying solely on cloud processing. He emphasizes the importance of hybrid AI systems that can intelligently distribute processing between edge devices and cloud infrastructure based on specific use cases and requirements.
EVIDENCE
We at Qualcomm have been trying to really democratize AI and then try to see if we can translate AI to be resident on devices. These devices could be personal devices like phones, could be a laptop, could be your smart watches, your smart glasses. The concept of hybridization is the idea of a system that is built around the idea of coexistence between what runs on the cloud and what runs on the edge.
MAJOR DISCUSSION POINT
AI Integration in Telecommunications Infrastructure
Argument 2
Edge vs cloud decision-making should prioritize data privacy, user privacy, and responsiveness for edge processing
EXPLANATION
Dr. Sukumar outlines the criteria for determining what AI processing should happen on edge devices versus in the cloud. Edge processing should handle scenarios requiring data privacy, user privacy, better responsiveness, and predictable performance, while cloud processing is better suited for fleet management and AI/ML training operations.
EVIDENCE
if you're looking at areas where you want to focus more on data privacy, user privacy, better responsiveness, better data management, a predictable end-to-end performance, to a large extent that's going to be happening on the edge. And as you go more towards cloud, I would say the emphasis is going to be a lot more around fleet management, anything to do with AI/ML training as such.
MAJOR DISCUSSION POINT
Security and Ethical Considerations
DISAGREED WITH
Mr Pasi Toivanen
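Dr. Sukumar's edge-versus-cloud criteria can be sketched as a simple placement policy. The `Workload` fields and decision rule below are illustrative assumptions for this summary, not an actual Qualcomm API: privacy-sensitive or latency-critical inference stays on the device when the model fits, while training and fleet management go to the cloud.

```python
# Hypothetical sketch of the edge-vs-cloud placement criteria described above.
from dataclasses import dataclass

@dataclass
class Workload:
    privacy_sensitive: bool   # touches personal/user data
    latency_critical: bool    # needs responsive, predictable output
    is_training: bool         # AI/ML training or fleet-management job
    fits_on_device: bool      # model small enough for the device's NPU

def place(workload: Workload) -> str:
    """Return 'edge' or 'cloud' for a given AI workload."""
    if workload.is_training:
        return "cloud"        # training and fleet management stay in the cloud
    if (workload.privacy_sensitive or workload.latency_critical) \
            and workload.fits_on_device:
        return "edge"         # keep data local, get predictable latency
    return "cloud"            # everything else defaults to the cloud

# Example: an on-device voice assistant query
print(place(Workload(privacy_sensitive=True, latency_critical=True,
                     is_training=False, fits_on_device=True)))  # edge
```

A real hybrid system would make this decision dynamically per request, which is the routing challenge flagged later in the discussion.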
Mr Pasi Toivanen
3 arguments · 118 words per minute · 687 words · 348 seconds
Argument 1
Success in AI era requires building value-rich ecosystems with collaboration between technology players, regulators, and government agencies
EXPLANATION
Mr. Toivanen argues that no single entity, regardless of expertise, can fully capture all aspects of AI implementation alone. Success depends on creating comprehensive ecosystems where technology companies, regulators, and government agencies work together to understand opportunities, risks, and challenges while transparently distributing the value created.
EVIDENCE
I don’t believe that any player, even though they would be the smartest people on the planet, are able to fully capture the 360 by playing alone. Ecosystem it is. How you are able to proactively define the overall value of this AI evolution. And then transparently and proactively agree how that value is distributed, how that value is maximized within that ecosystem.
MAJOR DISCUSSION POINT
AI Integration in Telecommunications Infrastructure
AGREED WITH
Mr. Shantigram Jagannath
Argument 2
Decision-making should be pushed to network level for security and optimization, with limited cases going to edge or regional data centers
EXPLANATION
Mr. Toivanen advocates for keeping most AI decision-making within the network itself to improve efficiency and reduce complexity. He suggests that pushing more decisions to edge or regional data centers creates inefficiencies, so the preference should be for network-level automation with selective use of edge and minimal reliance on regional data centers.
EVIDENCE
I would push as much of the optimization and automation to the network itself, then the limited cases to edge, and less and less to the actual regional data center. The more decisions travel to the regional data centers, the more we are contributing to inefficiency and hence also to complexity.
MAJOR DISCUSSION POINT
Network Optimization and Operational Efficiency
DISAGREED WITH
Dr. Vinesh Sukumar
Argument 3
Holistic ecosystem approach is necessary to address network evolution challenges rather than implementing patch fixes
EXPLANATION
Mr. Toivanen emphasizes that successful network evolution requires end-to-end thinking and ecosystem collaboration rather than isolated solutions. Without comprehensive planning involving all stakeholders, organizations end up implementing fragmented patch fixes that don’t address the fundamental challenges of network transformation.
EVIDENCE
without thinking the network evolution end-to-end, you are not able to address it. So it comes to this ecosystem of players that you are able to model what the change is introducing to the network and optimize the network end-to-end. Otherwise, you are going to put patch fixes here and there based on certain application behaviors and you are not able to evolve the whole network.
MAJOR DISCUSSION POINT
Practical Implementation Challenges
Mr. Shantigram Jagannath
3 arguments · 138 words per minute · 1391 words · 602 seconds
Argument 1
AI adoption framework should focus on cost optimization (CAPEX/OPEX) and revenue generation through product enhancement and AI-as-a-service models
EXPLANATION
Mr. Jagannath proposes a structured framework for telecom operators to evaluate AI investments based on two main criteria: cost optimization (both capital and operational expenses) and revenue generation opportunities. He suggests revenue can come from enhancing existing telecom products and creating new AI-as-a-service business models.
EVIDENCE
So you can either be looking at cost, or you can be looking at revenue. Or both. When it comes to cost, it is optimization of either the CAPEX or optimization of OPEX in a simplistic sense. And on the other side, you have a possibility of generating revenue by providing AI through the telecom network.
MAJOR DISCUSSION POINT
Network Optimization and Operational Efficiency
Argument 2
Indian telecom faces unique challenge of leveraging fresh equipment investments while implementing AI, requiring choice between AI-first architecture or bolt-on capabilities
EXPLANATION
Mr. Jagannath highlights a specific challenge for Indian telecom operators who have made recent equipment investments and need to leverage these for the next 10 years. Unlike mature Western networks that have completed capital cycles, Indian operators must choose between implementing AI-native architectures or adding AI capabilities to existing equipment.
EVIDENCE
unlike some of the more mature networks in the West, where the capital cycle has already gone through 4G and 5G and has paid off over 5-10 years, in India we have made fresh investments even as recently as last year. Now, if that kind of equipment has to be leveraged for the next 10 years, how do you deliver AI? That is one challenge we'll have to solve
MAJOR DISCUSSION POINT
Practical Implementation Challenges
DISAGREED WITH
Magnus Ewerbring
Argument 3
Network evolution must be planned end-to-end to handle transition from 118 crore human users to potentially 500 crore AI agents in the future
EXPLANATION
Mr. Jagannath envisions a fundamental shift in network usage patterns, where AI agents will eventually dominate network traffic rather than human users. This transformation requires rethinking business models, economics, and policy frameworks to handle the massive scale increase and different usage patterns of AI agents.
EVIDENCE
today, telecom networks are mostly catering to human users. Three, four, five years down the road I do expect AI users to be starting to dominate. Today we have 118 crore mobile phones; five years from now we might actually have 500 crore AI agents which are doing various things, but they are still communicating either with each other or with central data repositories
MAJOR DISCUSSION POINT
Practical Implementation Challenges
AGREED WITH
Mr Pasi Toivanen
Ms. Pallavi Mishra
2 arguments · 56 words per minute · 601 words · 638 seconds
Argument 1
AI in telecommunications enables networks that can self-heal, detect faults proactively, and deliver seamless connectivity to billions without interruption
EXPLANATION
Ms. Mishra presents a vision of AI-powered telecom networks that can autonomously maintain themselves and predict problems before they occur. She emphasizes that this transformative capability is not science fiction but represents the current potential of AI in telecommunications.
EVIDENCE
Imagine a telecom network that can heal itself, that can detect faults even before we know them, and deliver seamless connectivity to billions without interruption. This is not science fiction. This is the power of AI in telecommunication.
MAJOR DISCUSSION POINT
AI Integration in Telecommunications Infrastructure
AGREED WITH
Shri Anil Kumar Lahoti, Magnus Ewerbring
Argument 2
AI is transforming industries with humongous possibilities in telecommunications, from predictive network management to intelligent customer experiences
EXPLANATION
Ms. Mishra argues that AI’s transformative impact extends across industries, with telecommunications being a key beneficiary. She highlights the vast potential for AI applications in both network operations and customer service enhancement.
EVIDENCE
Today, AI is transforming industries. And as we look ahead, AI is all set to become even more transformative. From predictive network management to intelligent customer experiences, the possibilities are humongous.
MAJOR DISCUSSION POINT
Current AI Applications and Benefits
Shri Ritu Ranjan Mittar
3 arguments · 119 words per minute · 743 words · 374 seconds
Argument 1
AI implementation in handsets will create new challenges for telecom networks that need to be addressed
EXPLANATION
Shri Mittar raises concerns about the impact of AI-enabled devices on network infrastructure. He suggests that as AI capabilities are integrated into handsets and mobile devices, this will create new demands and challenges for the underlying telecom networks that operators must prepare for.
EVIDENCE
Another thing I would also request you to dwell on is that ultimately AI is going to come on the handsets. So once AI comes in on the handsets, what kind of challenge will it throw to the network?
MAJOR DISCUSSION POINT
Practical Implementation Challenges
Argument 2
Security concerns arise from AI being potentially used to attack telecom networks, requiring proactive defensive measures
EXPLANATION
Shri Mittar highlights the dual nature of AI as both a tool for network enhancement and a potential security threat. He emphasizes the need for telecom operators and equipment manufacturers to develop strategies to protect networks from AI-powered attacks.
EVIDENCE
Are we going to be challenged by AI being used for attacking the networks? And what kind of steps do we intend to take?
MAJOR DISCUSSION POINT
Security and Ethical Considerations
AGREED WITH
Shri Anil Kumar Lahoti
Argument 3
AI’s compute-intensive nature raises sustainability concerns that must be addressed in alignment with UN Sustainable Development Goals
EXPLANATION
Shri Mittar points out that AI applications require significant computational resources, which has implications for energy consumption and environmental sustainability. He emphasizes that telecom operators and equipment manufacturers must consider sustainability as they implement AI solutions, particularly given their commitments to UN SDGs.
EVIDENCE
Another thing with AI is that it is compute intensive, so sustainability, as also listed, is going to be important. Your countries are all signatories to the Sustainable Development Goals of the UN, so this aspect is also very important
MAJOR DISCUSSION POINT
Environmental and Sustainability Considerations
Audience
1 argument · 134 words per minute · 149 words · 66 seconds
Argument 1
Introducing AI into India’s existing 118 crore mobile connections presents significant disruption risks that need careful management during the transition
EXPLANATION
The audience member raises concerns about the practical challenges of implementing AI across India’s massive existing telecom infrastructure. They emphasize that any disruption to the current network serving 118 crore connections would cause significant problems and resource losses, making the transition strategy critical.
EVIDENCE
as I was hearing all that: we have around 118 crore mobile connections in India, built up over a huge network of fiber, wireless, leased circuits and everything. Once we try to introduce AI into that, as we know it is not AI-native, we have to build it up in the form of external apps and the like, and any single minute of disruption causes huge resentment and the loss of all that time and resources
MAJOR DISCUSSION POINT
Practical Implementation Challenges
Agreements
Agreement Points
AI is becoming foundational and intrinsic to telecom networks rather than just an add-on application
Speakers: Shri Anil Kumar Lahoti, Magnus Ewerbring, Ms. Pallavi Mishra
AI is becoming foundational to telecom networks, not just an add-on, with 6G networks being AI-native
Networks are progressing toward level 4 autonomy by 2028, with 6G targeting fully autonomous level 5 operations
AI in telecommunications enables networks that can self-heal, detect faults proactively, and deliver seamless connectivity to billions without interruption
All speakers agree that AI represents a fundamental shift in telecom architecture, moving from being an application layer to becoming intrinsic to network operations, with 6G networks being AI-native from the ground up
AI provides significant operational benefits including energy efficiency and network optimization
Speakers: Shri Anil Kumar Lahoti, Magnus Ewerbring
Operators are achieving significant energy savings and have disconnected 2.1 million spam numbers through AI and blockchain-based filtering
AI enables significant improvements including 10% capacity optimization in link adaptation and 33% energy efficiency gains in network operations
Both speakers provide concrete evidence of AI’s current benefits in telecom operations, with measurable improvements in energy efficiency and network performance optimization
Ecosystem collaboration is essential for successful AI implementation in telecommunications
Speakers: Mr Pasi Toivanen, Mr. Shantigram Jagannath
Success in AI era requires building value-rich ecosystems with collaboration between technology players, regulators, and government agencies
Network evolution must be planned end-to-end to handle transition from 118 crore human users to potentially 500 crore AI agents in the future
Both speakers emphasize that AI implementation cannot be achieved in isolation and requires comprehensive collaboration between multiple stakeholders including technology companies, regulators, and government agencies
Security and trust are fundamental concerns that must be addressed in AI deployment
Speakers: Shri Anil Kumar Lahoti, Shri Ritu Ranjan Mittar
Trust must remain central to AI adoption, with transparency, accountability, and consumer rights protection as core principles
Security concerns arise from AI being potentially used to attack telecom networks, requiring proactive defensive measures
Both speakers highlight that security and trust are not optional considerations but fundamental requirements for AI deployment in telecommunications, requiring proactive measures and strong governance frameworks
Similar Viewpoints
Both speakers advocate for intelligent distribution of AI processing, with edge processing handling privacy-sensitive and latency-critical operations while minimizing reliance on distant data centers for efficiency
Speakers: Dr. Vinesh Sukumar, Mr Pasi Toivanen
Edge vs cloud decision-making should prioritize data privacy, user privacy, and responsiveness for edge processing
Decision-making should be pushed to network level for security and optimization, with limited cases going to edge or regional data centers
Both recognize the specific challenges India faces in implementing AI across its massive existing telecom infrastructure, emphasizing the need for careful transition strategies that minimize disruption while leveraging recent investments
Speakers: Mr. Shantigram Jagannath, Audience
Indian telecom faces unique challenge of leveraging fresh equipment investments while implementing AI, requiring choice between AI-first architecture or bolt-on capabilities
Introducing AI into India's existing 118 crore mobile connections presents significant disruption risks that need careful management during the transition
Both speakers view India as being in an advantageous position for AI implementation in telecommunications, with strong existing infrastructure providing a foundation for transformative AI applications
Speakers: Magnus Ewerbring, Ms. Pallavi Mishra
India's 5G coverage of over 90% population provides a strong platform for AI innovation and positions India advantageously for 6G transition
AI is transforming industries with humongous possibilities in telecommunications, from predictive network management to intelligent customer experiences
Unexpected Consensus
Sustainability concerns in AI implementation
Speakers: Shri Ritu Ranjan Mittar, Magnus Ewerbring
AI's compute-intensive nature raises sustainability concerns that must be addressed in alignment with UN Sustainable Development Goals
AI enables significant improvements including 10% capacity optimization in link adaptation and 33% energy efficiency gains in network operations
While one speaker raises concerns about AI’s energy consumption, another provides evidence of AI actually improving energy efficiency, creating an unexpected consensus that sustainability must be actively managed rather than being inherently problematic
Risk-based regulatory approach for AI applications
Speakers: Shri Anil Kumar Lahoti, Dr. Vinesh Sukumar
TRAI has implemented risk-based regulatory framework for AI in telecom, with stronger obligations for high-risk applications affecting consumers
Edge vs cloud decision-making should prioritize data privacy, user privacy, and responsiveness for edge processing
The regulatory authority and technology company unexpectedly align on the principle that different AI applications require different levels of oversight and processing approaches based on their risk profiles and privacy implications
Overall Assessment

The speakers demonstrate strong consensus on AI being foundational to telecom’s future, the need for ecosystem collaboration, and the importance of balancing innovation with security and trust. There is agreement on India’s advantageous position and the practical benefits AI already provides.

High level of consensus with complementary perspectives rather than conflicting views. The alignment suggests a mature understanding of AI’s role in telecommunications and readiness for coordinated implementation across regulatory, technology, and operational domains.

Differences
Different Viewpoints
Where AI decision-making should be located in network architecture
Speakers: Dr. Vinesh Sukumar, Mr Pasi Toivanen
Edge vs cloud decision-making should prioritize data privacy, user privacy, and responsiveness for edge processing
Decision-making should be pushed to network level for security and optimization, with limited cases going to edge or regional data centers
Dr. Sukumar advocates for edge processing for privacy and responsiveness concerns, while Mr. Toivanen argues for keeping most decisions at the network level to avoid inefficiency and complexity
Implementation approach for AI in existing telecom infrastructure
Speakers: Mr. Shantigram Jagannath, Magnus Ewerbring
Indian telecom faces unique challenge of leveraging fresh equipment investments while implementing AI, requiring choice between AI-first architecture or bolt-on capabilities
AI enables significant improvements including 10% capacity optimization in link adaptation and 33% energy efficiency gains in network operations
Mr. Jagannath emphasizes the practical constraints of recent equipment investments in India requiring bolt-on solutions, while Magnus focuses on the benefits achievable through AI optimization without addressing implementation constraints
Unexpected Differences
Scale of future network users and business model implications
Speakers: Mr. Shantigram Jagannath, Other panelists
Network evolution must be planned end-to-end to handle transition from 118 crore human users to potentially 500 crore AI agents in the future
Various arguments about current AI implementation and optimization
Mr. Jagannath’s projection of AI agents potentially outnumbering human users by 4:1 within 5 years represents a dramatically different vision of network evolution that other speakers did not address, suggesting disagreement on the timeline and scale of AI agent proliferation
Overall Assessment

The main areas of disagreement center on technical architecture decisions (edge vs network-level processing), implementation approaches for existing infrastructure, and the scale/timeline of AI transformation

Moderate disagreement level with significant implications – while speakers agree on AI’s transformative potential and the need for responsible implementation, their different approaches to technical architecture and implementation could lead to incompatible solutions and fragmented ecosystem development

Partial Agreements
Both speakers agree on the need for comprehensive, end-to-end planning and ecosystem collaboration, but disagree on the specific approach – Toivanen emphasizes ecosystem value distribution while Jagannath focuses on business model and policy framework changes
Speakers: Mr Pasi Toivanen, Mr. Shantigram Jagannath
Success in AI era requires building value-rich ecosystems with collaboration between technology players, regulators, and government agencies
Network evolution must be planned end-to-end to handle transition from 118 crore human users to potentially 500 crore AI agents in the future
Both agree on the importance of privacy and user protection, but disagree on implementation – Lahoti emphasizes regulatory frameworks and transparency while Sukumar focuses on technical architecture solutions through edge processing
Speakers: Shri Anil Kumar Lahoti, Dr. Vinesh Sukumar
Trust must remain central to AI adoption, with transparency, accountability, and consumer rights protection as core principles
Edge vs cloud decision-making should prioritize data privacy, user privacy, and responsiveness for edge processing
Takeaways
Key takeaways
AI is transitioning from an add-on to a foundational capability in telecommunications, with 6G networks expected to be AI-native
India's extensive 5G coverage (90%+ population) positions it advantageously for AI innovation and 6G transition
Trust, transparency, and accountability must remain central to AI adoption in telecom, given the scale of impact on millions of users
TRAI has established a risk-based regulatory framework for AI in telecom, with stronger obligations for high-risk consumer-facing applications
AI is already delivering significant benefits including 10% capacity optimization, 33% energy efficiency gains, and blocking 400 million spam communications daily
Success in the AI era requires building collaborative ecosystems between technology players, regulators, and government agencies rather than individual efforts
Hybrid AI systems combining edge and cloud capabilities are emerging as the preferred approach for balancing performance, privacy, and efficiency
The telecom industry must prepare for a fundamental shift from serving primarily human users to potentially handling 500 crore AI agents in the future
Resolutions and action items
TRAI to continue working with stakeholders, industry, policymakers and international partners to ensure AI serves both innovation and public good
Industry to target level 4 network autonomy by 2028 with progression toward fully autonomous level 5 operations in 6G
Continued rollout of digital consent acquisition framework following successful pilot runs with banks
Development of more sophisticated network management capabilities to create different network slices for various use cases
Investment in denser fiber optic infrastructure carrying 8-20 terabytes to support future AI traffic demands
Unresolved issues
How to balance leveraging fresh equipment investments in India while implementing AI (AI-first architecture vs bolt-on capabilities)
Determining optimal decision-making distribution between network edge, regional data centers, and cloud infrastructure
Establishing economic models and pricing frameworks for AI agent communications and services
Ensuring equitable bandwidth distribution between urban and rural areas when AI optimizes network resources
Managing the transition from current human-centric networks to AI agent-dominated traffic patterns
Addressing security vulnerabilities that may emerge from AI-driven network attacks
Developing dynamic routing capabilities for hybrid AI systems that can intelligently switch between edge and cloud processing
Suggested compromises
Implementing a risk-based regulatory approach where low-risk AI applications use self-regulation while high-risk applications require stronger oversight
Using regulatory sandbox approach to enable innovation while maintaining public interest protection
Adopting hybrid AI models that balance edge processing for privacy/responsiveness with cloud processing for complex operations and fleet management
Creating sophisticated network slicing capabilities to ensure fair resource allocation across different user segments and geographic areas
Developing telecom networks as platforms (app store model) where AI applications can be dynamically uploaded while maintaining trust and safety frameworks
Thought Provoking Comments
In the upcoming 6G technology, AI will no longer be an application layer. It will be intrinsic. The telecom networks will be AI native… telecom networks are no longer mere data carriers, but these are central pillar of India’s AI infrastructure.
This comment fundamentally reframes the relationship between AI and telecom networks from AI being an add-on service to being the foundational architecture. It shifts the paradigm from networks carrying AI applications to networks being inherently intelligent.
This set the conceptual foundation for the entire discussion, establishing that the conversation wasn’t about integrating AI into existing networks, but about reimagining networks as AI-native infrastructure. All subsequent panelist discussions built upon this fundamental shift in thinking.
Speaker: Shri Anil Kumar Lahoti (TRAI Chairman)
automated decisions taken by algorithms can affect millions of users simultaneously; this makes trust the central pillar of AI adoption in telecommunication. Efficiency gains cannot come at the cost of transparency, accountability or consumer rights
This insight highlights the unique scale challenge in telecom AI deployment – unlike other sectors where AI affects individual users, telecom AI decisions can simultaneously impact millions. It introduces the critical tension between efficiency and accountability.
This comment shifted the discussion from purely technical considerations to ethical and governance frameworks. It established trust as a non-negotiable requirement and influenced later discussions about security, transparency, and responsible deployment.
Speaker: Shri Anil Kumar Lahoti (TRAI Chairman)
Who is able to gather like-minded technology players, regulators, government agencies to think together the opportunity, the risks, the challenges, that whole 360 of AI. I don’t believe that any player, even though they would be the smartest people on the planet, are able to fully capture the 360 by playing alone. Ecosystem it is.
This comment challenges the traditional competitive approach in telecom by arguing that AI’s complexity requires unprecedented collaboration. It suggests that the AI era demands a fundamental shift from competition to ecosystem thinking.
This reframed the discussion from individual company capabilities to collaborative ecosystem development. It influenced the conversation toward shared responsibility and collective problem-solving, moving away from vendor-specific solutions to industry-wide cooperation.
Speaker: Mr Pasi Toivanen (Nokia)
So the telecom network essentially becomes a platform where simple models, easy models can be actually uploaded and made accessible to all the users of the telecom network… like an app store model.
This introduces a revolutionary business model concept – transforming telecom networks from connectivity providers to AI platform operators. It envisions networks as marketplaces connecting AI developers with end users, particularly focusing on India’s ‘bottom of the pyramid’ population.
This comment introduced an entirely new revenue model and democratization framework that hadn’t been discussed before. It shifted the conversation from network optimization to network transformation as a platform business, influencing the final audience question about scaling from 118 crore connections to potentially 500 crore AI agents.
Speaker: Mr. Shantigram Jagannath (Tejas Networks)
Three, four years, five years down the road I do expect AI users to be starting to dominate. So the business model and the regulation has to support that kind of an evolution… Five years from now we might actually have 500 crores of AI agents which are doing various things but they are still communicating either with each other or with central data repositories.
This projection fundamentally challenges current assumptions about network users, suggesting a shift from human-centric to AI-agent-centric networks. It quantifies the scale of transformation (from 118 crore human users to 500 crore AI agents) and highlights the regulatory and economic implications.
This comment provided a concrete vision of the future that required rethinking everything from pricing models to network capacity planning. It elevated the discussion from current AI integration challenges to future-state planning and policy framework development.
Speaker: Mr. Shantigram Jagannath (Tejas Networks)
The concept of hybridization… working with our network operators is also not an easy concept. It’s always been a challenge to understand which of these experiences would be transitioned towards the cloud, how do you make that decisions, and what runs on the edge.
This comment identifies a critical technical and architectural challenge that goes beyond simple edge vs. cloud decisions. It introduces the complexity of dynamic, intelligent routing of AI workloads based on context, user needs, and network conditions.
This deepened the technical discussion by highlighting that AI in telecom isn’t just about deployment location, but about creating intelligent systems that can dynamically optimize where processing occurs. It influenced subsequent discussions about network intelligence and decision-making frameworks.
Speaker: Dr. Vinesh Sukumar (Qualcomm)
Overall Assessment

These key comments fundamentally transformed the discussion from a technical implementation conversation to a strategic reimagining of telecom’s role in the AI era. The Chairman’s opening comments established AI-native networks as the new paradigm, while subsequent panelist insights built layers of complexity around ecosystem collaboration, platform business models, and the transition from human-centric to AI-agent-centric networks. The discussion evolved from ‘how to add AI to telecom’ to ‘how to rebuild telecom as AI infrastructure,’ with each insightful comment adding dimensions of scale, trust, collaboration, and future vision. The conversation successfully bridged technical capabilities with policy implications, business model innovation, and societal impact, particularly emphasizing India’s unique position to lead this transformation while serving its diverse population pyramid.

Follow-up Questions
How will AI-driven automation reach level 4 autonomy by 2028 and what specific challenges need to be overcome?
This represents a significant industry goal that requires detailed roadmap and implementation strategies
Speaker: Magnus Ewerbring
How can hybrid AI systems intelligently route workloads between edge and cloud in real-time during multi-turn conversations?
This is identified as a major research challenge that needs breakthrough solutions for dynamic workload distribution
Speaker: Dr. Vinesh Sukumar
What specific security vulnerabilities will emerge from AI-native networks and how should they be addressed proactively?
Security risks in AI-enabled networks will be fundamentally different and require new approaches for protection
Speaker: Mr Pasi Toivanen
How can telecom networks ensure equitable AI service distribution between urban and rural areas?
This addresses critical ethical concerns about AI access equality across different geographic and economic segments
Speaker: Shri Ritu Ranjan Mittar
What business models and regulatory frameworks are needed to support AI agents as dominant network users?
The transition from human-centric to AI agent-dominated networks requires fundamental rethinking of economics and governance
Speaker: Mr. Shantigram Jagannath
How can existing telecom infrastructure be upgraded to AI-native systems without service disruption to 1.18 billion connections?
This represents a massive technical and operational challenge requiring careful migration strategies
Speaker: Audience member
What are the optimal architectures for AI-first versus bolt-on AI implementations in telecom networks?
Network operators need guidance on whether to rebuild with AI-native architecture or enhance existing infrastructure
Speaker: Mr. Shantigram Jagannath
How will 6G networks achieve full level 5 autonomy and what standards need to be developed?
This represents the next evolution beyond current AI implementations and requires international coordination
Speaker: Magnus Ewerbring
What specific mechanisms will enable telecom networks to function as AI application platforms similar to app stores?
This new business model concept requires detailed technical and commercial frameworks
Speaker: Mr. Shantigram Jagannath
How can voice biometric authentication be implemented securely across telecom networks while protecting user privacy?
This emerging application raises important questions about privacy, security, and implementation at scale
Speaker: Mr. Shantigram Jagannath

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Safeguarding Children with Responsible AI

Session at a glance: Summary, keypoints, and speakers overview

Summary

This discussion at the India AI Impact Summit focused on the responsible development and governance of AI systems for children, emphasizing the need for proactive safety measures rather than reactive regulation. Baroness Joanna Shields opened by arguing that AI governance for children represents the clearest test of responsible technology development, warning that children must not become “beta testers” for AI systems that simulate intimacy and human connection at unprecedented scale.


Young AI innovator Rahul John Aju provided a child’s perspective, emphasizing the importance of teaching foundational skills before introducing AI tools, comparing it to learning basic mathematics before using calculators. He stressed that schools should teach children “how to think” rather than “what to think,” while highlighting the massive demand for AI education among young people. The panel discussion featured experts from UNICEF, OpenAI, LEGO Education, and academic institutions who explored three key areas: AI’s potential to enhance learning through personalized education, the risks of over-dependency and cultural homogenization, and practical governance solutions.


Key recommendations included implementing “safety by design” principles with robust age verification systems, conducting real-world evaluations of AI systems in deployment contexts, and ensuring children’s active participation in governance decisions. OpenAI’s Chris Lehane outlined a comprehensive child safety package including age assurance, parental controls, and restrictions on targeted advertising. The panelists emphasized the need for AI literacy programs that empower both children and teachers, while warning against creating a technological monoculture that erases cultural diversity. The discussion concluded with calls for “inclusion by default” and treating children as partners rather than passive recipients in shaping AI’s future development.


Keypoints

Major Discussion Points:

AI Safety and Child Protection: The need to move from a “post-harm regulatory model” (reactive approach used with social media) to “safety by design” for AI systems, with emphasis on age-appropriate experiences, robust age assurance technology, and protecting children from simulated intimacy that they cannot distinguish from authentic human connection.


AI Literacy and Education: The importance of teaching children foundational skills before introducing AI tools, similar to learning basic math before using calculators. Discussion focused on empowering children to understand AI systems rather than just use them, with emphasis on critical thinking, agency, and personalized learning approaches.


Technical Evaluation and Real-World Testing: The necessity of conducting ongoing studies and evaluations of AI systems in real-world deployment contexts rather than just laboratory settings, including understanding how children are actually exposed to content, profiling, and commercial influences on platforms.


Global Governance and Cultural Diversity: Balancing the need for universal safety standards with respect for cultural contexts, while avoiding the risk of creating a “monoculture” dominated by models from the Global North. Discussion included the need for globally interoperable baseline protections while allowing for local customization.


Children as Active Participants: Emphasizing that children should be involved in the governance and design of AI systems rather than being passive recipients, recognizing their agency and ability to provide valuable feedback on what works and doesn’t work for them.


Overall Purpose:

The discussion aimed to establish frameworks for responsible AI development that prioritizes children’s safety, well-being, and agency. The session sought to move beyond theoretical principles to practical, enforceable measures for protecting children while harnessing AI’s potential to enhance learning, creativity, and access to knowledge.


Overall Tone:

The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in Baroness Shields’ opening about AI engineering “simulated intimacy”), evolved into energetic engagement during Rahul’s presentation, and settled into constructive problem-solving during the panel discussion. The tone remained collaborative and forward-looking, with participants acknowledging both tremendous opportunities and significant risks while emphasizing the need for immediate, thoughtful action rather than reactive measures.


Speakers

Speakers from the provided list:


Baroness Joanna Shields – Former UK government roles focused on Internet safety and harms, helped build major child online safety coalitions internationally


Rahul John Aju – Widely recognized as the “AI kid of India,” young AI innovator who has built and deployed real-world AI tools, founded his own AI startup (AIRM Technologies), and advised public institutions on using AI


Thomas Davin – Director of the Office of Innovation at UNICEF, co-moderator


Urvashi Aneja – Director of the Digital Futures Lab, co-moderator


Chris Lehane – Chief Global Affairs Officer for OpenAI


Tom Hall – Vice President and General Manager at LEGO Education (works with the LEGO Foundation)


Maria Bielikova – Director of the Kempelen Institute for Intelligent Technologies, works on user modeling, personalization, and trustworthy AI, has spoken publicly about disinformation risks


Moderator – Session moderator/MC (main event moderator, distinct from the panel co-moderators)


Additional speakers:


None – all speakers mentioned in the transcript were included in the provided speaker names list.


Full session report: Comprehensive analysis and detailed insights

This discussion at the India AI Impact Summit on day five brought together diverse stakeholders to address AI governance for children. The session was co-moderated by Thomas Davin, Director of the Office of Innovation at UNICEF, and Urvashi Aneja, Director of the Digital Futures Lab, though UN Undersecretary General Amandeep Gill was unable to attend due to Delhi traffic.


Opening: The Urgency of Proactive AI Governance

Baroness Joanna Shields opened with a stark warning about AI regulation, drawing from her 15 years of experience in technology policy. She argued that the “post-harm regulatory model” used with social media would be inadequate for artificial intelligence, stating that AI governance for children represents “the clearest test yet on whether we are governing this technology responsibly and for the public good.”


The Baroness introduced the concept of “artificial intimacy,” explaining that unlike social media platforms which facilitate user interactions, AI systems create direct, personalised relationships with children. These systems are “increasingly embedded in how children learn, communicate, create, and form their own sense of self,” creating “simulated intimacy and human-like interaction at a scale that is hard to imagine.” The risk lies in children’s developmental inability to distinguish between authentic human connection and artificial intimacy, particularly when AI systems are designed to be “persuasive, emotionally responsive, and always available.”


She emphasised that children must not become “beta testers for our AI-enabled world,” noting early indicators of harm including “emotional dependency, manipulation, deep fake abuse, and in some cases, devastating loss.”


A Youth Perspective on AI Literacy

Rahul John Aju, introduced as “the AI kid of India,” brought essential youth perspective to the discussion. Having bunked his exam to attend, Rahul demonstrated the agency that would become a key theme. He is the founder of AIRM Technologies and ThinkCraft Academy, and has created tools like “Rescue AI” while reaching 7 lakh people through his educational content.


Rahul’s core argument centred on educational transformation. He observed that while his father taught him to “question everything,” AI has created a situation where “even parents can’t figure out what is the right information and fake information.” He advocated for mastering foundational skills before introducing AI tools, noting: “I only got access to [calculators] once I learned the basics of maths. I believe AI should be the same. We should learn how to write essays. We should learn how to sing, maybe. Then you should use AI.”


He argued that “schools teach us what to think but I believe schools should teach us how to think,” emphasising critical thinking over rote learning. Rahul demonstrated practical AI literacy by showing the audience tools like Notebook LM and StudyFetch, while warning about trends like using AI for Ghibli-style content creation without understanding the implications.


Industry Perspectives on Safety and Education

Chris Lehane from OpenAI described AI as “an incredibly leveling technology” that could provide every child with their own AI tutor, capable of individualised teaching adapted to different learning styles. He argued that current educational systems, designed for the industrial age, are misaligned with an AI-enabled future that will reward individual agency and creativity.


OpenAI’s child safety approach includes age assurance technology that defaults users to under-18 models when age cannot be determined, comprehensive parental controls, prohibition of targeted advertising to children, and external review processes. The company has also restricted “kid-specific bots” until adequate guardrails are established, acknowledging particular risks of AI systems designed to form relationships with children.


Tom Hall from LEGO Education highlighted a critical implementation gap: while 80% of teachers recognise AI literacy as foundational, only 41% feel prepared to teach it. His approach emphasises empowerment over restriction, describing it as “handing children a screwdriver and saying, here is a fairly complex box, but let’s take it apart and let’s understand what’s under the hood.” LEGO Education has developed policy toolkits to support educators in this transition.


Research Evidence and Real-World Impact

Maria Bielikova, Director of the Kempelen Institute for Intelligent Technologies, brought empirical evidence highlighting gaps between policy intentions and outcomes. Her research on children’s TikTok exposure revealed that while children see fewer formal advertisements, they are “exposed five times more to profiling to the topics with influencers and so on” – circumventing traditional advertising restrictions.


This finding supported her argument that current AI systems are “so complex that we cannot actually measure something that we don’t know. We can observe it and this is quite important to do a lot of studies.” She called for independent evaluation studies rather than relying solely on company analytics, referencing frameworks like the Digital Service Act in Europe.


Bielikova offered a memorable analogy: “It is the same as we will prohibit children to go to the city. But we should know what is going on and we should travel with them through this environment.” This suggests guided exploration rather than blanket prohibitions.


Governance Challenges and Cultural Considerations

The discussion revealed consensus on fundamental principles but complexity in implementation. Baroness Shields warned about cultural homogenisation if AI models primarily reflect Global North perspectives, risking the development of a “monoculture” that would mean “we will lose so much of our cultural diversity, our uniqueness as people.”


She proposed technical solutions through the Open Age Alliance’s development of portable “age keys” that travel with children across platforms, enabling graduated responses rather than blanket age restrictions. However, Lehane acknowledged that privacy regulations, particularly in Europe, create limitations for age assurance technologies.


Unresolved Questions and Future Challenges

Thomas Davin raised a provocative concern about AI’s effectiveness potentially harming development: “Can we design a model that actually gives the wrong answer on purpose so that the child actually struggles because we know that grit is going to be one of the huge skills of tomorrow?” This highlighted tensions between helpful assistance and preserving essential human capabilities.


The discussion also grappled with a sobering statistic Davin shared: “7 out of 10 children in classrooms cannot explain to us a text that they read at 10 years of age,” underscoring the urgency of educational transformation.


Key Principles and Next Steps

Three core directions emerged from the discussion:


Safety by Design: Proactive safety measures built into AI systems from the outset, including age-appropriate content, robust privacy protections, and effective redress mechanisms.


Cultural Inclusion: AI development that represents diverse perspectives, languages, and contexts, including solutions that work for offline and unconnected populations.


Children as Partners: Moving beyond paternalistic protection to include children as active participants in governance, as demonstrated by Rahul’s meaningful engagement with complex policy questions.


Conclusion

The session demonstrated that effective AI governance for children requires technical solutions, regulatory frameworks, and fundamental reimagining of technology development approaches. The convergence between industry and advocacy perspectives on core principles suggests maturing understanding, while unresolved implementation questions indicate significant work remains.


As Davin concluded with “measured optimism,” the discussion highlighted both tremendous potential and significant risks, emphasising the need for continuous evaluation and adaptation as AI systems become more sophisticated and pervasive. The challenge lies in translating these insights into concrete policies that can keep pace with technological advancement while preserving human agency and cultural diversity.


Session transcript: Complete transcript of the session
Baroness Joanna Shields

governance. How we manage AI on behalf of children will be the clearest test yet on whether we are governing this technology responsibly and for the public good. AI’s rapid adoption has been driven by extraordinary capabilities, but its continued place in society will depend on trust, and trust is built through responsible design. The post-harm regulatory model that we’ve seen with social media reacting after damage is not fit for purpose in the AI world. AI is fundamentally different. It is not a platform. It is increasingly a one-to-one adaptive interaction embedded in how children learn, communicate, create, and form their own sense of self. Inadvertently, AI is engineering simulated intimacy and human-like interaction at a scale that is hard to imagine.

When a model says to a child, I care, I understand, that’s not conscience, that’s code. But for a child, it can feel very real. And children are not miniature adults. They cannot reliably distinguish between authentic human connection and artificial intimacy, especially when systems are so persuasive, emotionally responsive, and always available. That difference has implications not only for safety, but for mental health, identity formation, and long-term well-being. We have already seen what happens when the line blurs. Emotional dependency, manipulation, deep fake abuse, and in some cases, devastating loss. Children must not be the beta testers for our AI-enabled world. We need age-appropriate experiences by default, with guardrails around systems that simulate intimacy without accountability.

The question is not whether AI will continue to advance. Of course it will. The question is whether we shape it in a way that safeguards the dignity and the development of children. And accountability begins with protection. And I’m excited to join this distinguished panel to have this important conversation, even though it’s day five of the summit. Thank you very much. I’m going to have to move this back up. I’m sorry.

Moderator

Thank you so much, Baroness Joanna Shields, for setting the stakes so clearly. Too often, discussions about children and technology speak about children rather than with them. This session is intentional in doing otherwise. Therefore, I am very pleased to introduce Rahul John Aju, widely recognized as the AI kid of India. He is our featured young AI innovator who has built and deployed real-world AI tools, founded his own AI startup, and advised public institutions on using AI. Rahul, I’d like to invite you on stage.

Rahul John Aju

Thank you. Thank you, guys. Thank you so much for the lovely introduction. I know safety is a bit boring topic, but it’s a very crucial topic. And I think if I stand there, no one is going to see me, so I’m using a hand mic. So hopefully everyone can see me. Yes? Can I get more energy? Hi, guys. Is this all you guys have? Hi. Perfect. So let’s get started. Starting with, you know, when I was young, my father used to tell me… Okay, I’m still young. I’m still young. Younger, younger. That’s what I bet. He still tells me that Raul, question everything. Be critical about everything. The slide changer is not working. Okay, without the slide changer also it will work.

Okay. Be critical about everything. Ask questions. So I did. Why does the chair have four legs? Why is the sky blue? And also, why do birds fly? Why can’t humans fly then? I bombarded him with a lot of questions. So he just took the phone and he’s like, Raul, this is Google. Go search it. And so I did. But you know, while I was using Google, my parents also taught me one thing. How to figure out what is the correct information and the fake information. And that helped me a lot. But this age of AI, how do you expect me to do it? I don’t think even parents can figure out what is the right information and fake information.

We all agree upon that? Yes? So how do we do that? Because curiosity is there in every child. I think I have enough curiosity. But it only becomes powerful if it’s guided the right way. So how do we guide the right way? Because right now we are just teaching kids how to talk to machines. Before we teach them how to… Question. Now I am just saying random quotes now but let’s dive deeper and see why. I will give an example. Everyone remembers the Ghibli trend? Everyone did it? I did it too. Guilty. But it was very fun to be honest. But what happened there was we were all just taking pictures, uploading our pictures to the cloud.

But we don’t even know what’s happening with it. We all agree, right? But right now kids are also doing the same thing, taking their pictures, uploading it to the cloud. But we tell children don’t be on social media, don’t upload your pictures to social media, don’t share your pictures to strangers and all, right? But what about the AI world? We are missing, the parallel is missing. We need to translate real world safety into the digital world. Because right now even most, okay, I have a question. How many of you guys read the 25 page terms and conditions? I don’t, right? You don’t know what’s happening behind the scenes. I don’t know what’s happening behind the scenes.

like most of these pictures were taken and obviously made for the model to be better for all of us, right? Right now a lot of companies are making sure children are safe but we don’t know about it. Are they safe? There are a lot of unknown AI companies as well. What do we do then? That’s right. Also I created an AI software where you can upload a full terms and conditions or any contract and it will tell you the high risk clauses, low risk clauses and it will, thank you and it will literally tell them what to do, if you should use the product or not, right? So be careful. Anyway, so that tool was known as Rescue AI.

I’ve been working on it for the past three years for emergency, for law people, a lot of things. I don’t want to promote myself too much but I’m trying to do that. But what about when things like that are not there? What about if I didn’t do something like that? That is why AI awareness and safety is necessary. Obviously it is. That’s why you’re called here, Raul. But how do you do that education? Right? How do you teach about AI? You know, recently I got calculator in my school and I am so happy because I don’t have to do maths by multiplying, dividing manually. I can do it through calculators in my exam. By the way, I bunked my exam and came today.

Anyway, very happy for that. But you have to do all this calculation. But because I have a calculator, it’s way easier. But I only got access to it once I learned the basics of maths, right? I believe AI should be the same. We should learn how to write essays. We should learn how to sing, maybe. Then you should, I don’t know how to sing. Everyone will run away if I start singing. But you should know the basics and the foundations before you start using AI. I feel that’s when you teach about AI. That’s when you say, okay, AI can help you do the essay. AI can help you do the song. You should use the natural intelligence first.

Then start using artificial intelligence. I believe. It’s about using the combination of both, right? Yes. How many of you guys use natural intelligence? Everyone does, right? I’m mostly reliant on artificial intelligence. I’ve got to switch to it. But that’s what matters. But it’s not just about that. It’s also about how we teach, deliver topics. Starting with personalized content. You know, reading for me is kind of boring. I’m so sorry. But everyone learns differently. It might be through reading. It might be through listening. It might be through watching videos, which I prefer the most. That’s how I learn most of the things that are happening. From geopolitics to cricket, which I love. All of these things I’ve learned because I watch the video.

I’m a more visual person. It’s not one size fits all. But sadly, I feel educationist. And I believe AI can generate content. Wait. It’s not believe. It’s already happening. You guys know about Notebook LM, right? It can generate videos. It can generate podcasts with one textbook content. That’s how I passed all my exams, to be honest. Even not just that, there is this tool StudyFetch where you can upload a chapter content and it will convert it into games. It’s not just about that. Everyone’s interest is different, right? Take a wild guess. What do you think my interests are? Wild guess. AI. AI, exactly. I am here to talk about AI guys. Cricket on the side but AI, right?

What if you connected E is equal to MC square and thought that through AI? You can do that too in this AI world. How do you do that? See, right now schools teach us what to think. I am repeating that. Schools teach us what to think but I believe schools should teach us how to think. How to think and how to think critical, how to think critically and how to face failures, how to communicate. These are basic things. Trust me, to stand here I had to face a lot of failures. But I learned how to do that because of my father. Trust me. I am giving you some credit. So, thank you. See, now he’s recording the audience, clapping for him.

Okay. So, that is what matters. And here’s one proof of demand, okay. I started something under my company, AIRM Technologies, ThinkCraft Academy. Yes, a bit of promoting, but ThinkCraft Academy, where I taught what is AI to building your own AI, LLM, fine tuning and all that, that in 30 days and more than 7 lakh people learned from that. And that course was completely free. And even there was another course going from what is AI to building your own AI as a startup founder, as a student. And that course was also completely free. But do you know how many people joined and learned from that? Again, 7 lakh people did, combined. that shows that people want to learn about AI.

It just has to be delivered the right way. The name of the course, I know everyone is searching right now, is on my YouTube channel. I’m a content creator too: Raul the Rockstar. Yes, you might be thinking, what does he not do, right? I’m joking. But a lot of things go on. See, I am not saying a lot of big things. I believe we all should be open-minded. We should be open to learning more things. We should be curious, because AI will not take your job, but someone using AI can. And at the same time, the most important thing in the world of AI is also to be as human as possible. My name is Rahul.

Thank you so much. Is it okay if I take a small video? Influencer. Thank you so much. I have to do this too, guys. So it’s very simple. Like I said, I have to do this, totally forced to. I am just going to say, “AI Impact Summit, how was the session?” and if you guys didn’t like it, just say, “No, hated it.” You can say that, be fully honest. I am totally joking. I am very grateful for this opportunity. You know, last November I was wanting to come here, I was like, register for this, and the fact that they called me to speak here, I am very grateful for this opportunity, and we have to thank them. Thank you. Shall we do it? “AI Impact Summit, Delhi, by UN.” Okay, not by, okay, what’s the worry, it’s a part, right? Okay, this is how many times I have to record a normal video. Thank you so much, UN, for calling me, and AI Impact Summit. Audience, how was the session? Was it boring? Yes? Was it boring? You guys are agreeing it’s boring? No? Thank you, guys. Thank you. I will not take too much time.

Moderator

Thank you, Rahul, for that very thoughtful and energizing address. Your perspective underscores a key message for today: the question is not whether children will engage with AI, but whether adults, institutions, and systems are prepared to guide that engagement responsibly. We will now turn to our panel discussion. The discussion will be guided by two co-moderators with deep expertise at the intersection of innovation, policy, and child well-being. I am pleased to introduce our moderators, Thomas Davin, Director of the Office of Innovation at UNICEF, as well as Urvashi Aneja, Director of the Digital Futures Lab, and I invite them to guide the discussion.

Thomas Davin

Thank you. Can you hear me? Yes? All right, so delighted to be here with you all. I’m one of the two co-moderators, and I’m delighted to invite four leaders in the industry who are going to have the high bar of keeping you all as entertained and on substance as Rahul just did. So please, a warm welcome to Baroness Joanna Shields; Maria Bielikova, Director of the Kempelen Institute for Intelligent Technologies. I took the liberty of not reintroducing the Baroness because I think she was already known to you. Chris Lehane, welcome, Chief Global Affairs Officer for OpenAI. Tom Hall, welcome, Vice President and General Manager of LEGO Education. Over to you, Urvashi.

Urvashi Aneja

Thank you, and thank you to the UNICEF team, and thank you for that very energizing opening. I hope we can live up to that level of dynamism. Oh yes, the Baroness wants to know if we can invest in your company. Okay, great. So on that very cheerful note, thank you all for being here, and I’m delighted to be able to moderate this discussion at the India AI Impact Summit. As someone who studies the governance choices that shape how technologies land in society, I’m interested in a very simple test: whether AI expands children’s agency and learning, or whether it quietly narrows them through design incentives and design choices. So let’s begin with what we want AI to enable for children, at scale and in practice. Tom, perhaps I can start with you first. LEGO Education has recently pushed into computer science and AI learning in young classrooms. So what does AI literacy that supports well-being look like in real classrooms, and what should we do if we want AI to deepen creativity rather than replace it?

Tom Hall

Well, first of all, thank you for having me, and very tough shoes to fill after Rahul’s spot there. I agree with so much of what he just had to say and, yeah, I’d love him to come and guide some more conversations. Being at this conference, I think we can all see that the rate of technological advancement is breathtaking. And often, whether we’re deeply involved in it or on the sidelines, there can be a feeling of incredible excitement; there can also be a feeling of, frankly, doom that this change is happening so fast. And I think that we kind of underestimate what the role of children is going to be in this journey. They might look at what’s happening in the world of AI and simply see it as a magic box that they can interrogate at the click of a button, ask simple questions, and get really quite deep answers back.

It might be a funny video they want to produce. It might be the answer to a history exam that they have to submit on Monday morning. And what we think AI literacy is, is ultimately handing children a screwdriver and saying: here is a fairly complex box, but let’s take it apart and understand what’s under the hood. Let’s understand all the components. So for us, AI literacy is allowing children, and empowering them, to really interrogate the fundamental basis of computer science and artificial intelligence. That’s teaching them how computers see the world as data, what sensing is, how to think about predictability, how to think about bias, and forcing conversations about accountability.

So we want to empower children to have deep thoughts about this. We also want to empower teachers. And I think right now, again, this pace is happening so fast. We asked some primary and middle school teachers in the United States what they thought about the pace of artificial intelligence in classrooms, and a very high number of them are very excited about what’s happening. They agree that artificial intelligence literacy needs to be a foundational skill in school, but while 80% of them see that, only 41% of them feel remotely ready to go and teach AI literacy in a classroom. So I think we have to provide teachers with the tools that are going to allow them to bring real-world learning to life.

Urvashi Aneja

Thanks, Tom, and I would love to come back to you at a later stage in the panel on the how, because we do a lot of work with policy makers, trying to provide capacity support, and we really struggle with how you actually embed AI literacy, so I imagine it’s similar for children, and I think that’s a really good point. We really have to think about the pedagogy quite carefully to make sure that we are imparting that learning. So I’d love to come back to you on that. Chris, if I can bring you in next. OpenAI has emphasized that AI systems will increasingly support learning, creativity, and problem-solving for young people.

From your perspective, where do you see the most promising opportunities for AI to positively shape children’s experiences, particularly in ways that strengthen agency, curiosity, or access to knowledge? And you’re not allowed to say what Rahul already said.

Chris Lehane

I was just going to say, you already got a great explanation of that. First of all, thanks for having me. Awesome panel. Baroness, always good to be with you. My son would be very jealous that I’m sitting next to the Lego guy; that’s a pretty cool thing. So thank you. And I’ll just also share, I may have to exit a little bit early because I’m double-scheduled, so if so, my apologies in advance. I’ll try to answer your question at a macro level and then a more specific level that I think picks up on the pedagogy question you were just asking. First of all, this technology has enormous capability to basically individualize teaching.

I mean, you’re at a place where every kid in the world could, in effect, have their own AI tutor that would be able to help them learn at the pace that they learn and in the ways that they learn. I think among the key insights in education is that kids just learn in very different ways, and this technology could be incredibly liberating in answering that. You mentioned the teachers. We do work with the largest teachers union in the United States, 400,000 teachers, to actually train them to use AI to, in fact, do some of that individualized teaching. But I think there’s maybe a level down from that, which I think you were picking up on when you were setting up this question.

And that’s the agency question. I know the U.S. public education system better than I know others around the world, so part of what I’ll say is really based on my U.S. experience. But the U.S. K-12 public education system, and I see the sign, yes, you’re telling me to shut up, was designed for the industrial age. It was basically designed to take kids who were coming from rural environments and urban environments and teach them to be able to work in factories. That was the bells, the different classrooms that you would go to, the time that the day started, how long the school day lasted. But at its core it was not just literacy in terms of teaching people to read, write, and do arithmetic.

It was actually creating an ethos about how you should work and participate in an industrial-age economy. I do think one of the big issues that we’re going to need to think about with this particular technology, which is going to really reward people like Rahul who take agency, is how do we actually teach people agency? This technology is an incredible leveling technology: it scales the ability of anyone to think, to learn, to create, to build, to produce. And the question is, do you actually encourage people to use it that way? Because if so, the way we think about the social-contract relationship between capital and labor, and how that is calibrated, this technology can have a huge impact on actually giving individuals the ability to control their own labor as owners of it.

Urvashi Aneja

Thanks, Chris. And I appreciate particularly the point around agency and how we can teach people agency. And I also wonder, sitting here in India, in the global south, one of the things that we can see very clearly is that agency in some sense is not only a factor of individual capacity, but has so much to do with the broader socioeconomic and institutional context in which you are. And so I wonder how we think about agency across different contexts. Back to you.

Thomas Davin

Thank you so much. Let’s get into the next segment, which is really about what happens when it fails, what happens when there’s harm being done. From a UNICEF lens, of course, when we think of education in the world today, 7 out of 10 children in classrooms cannot explain to us a text that they read at 10 years of age. 7 out of 10. So clearly the technology’s potential is immense in realizing huge leaps in learning outcomes. What happens if actually we go the other way, and we suddenly have an over-dependency on that technology for children, when we maybe frame children’s creativity in ways that actually constrain it or make it one size fits all? So let’s go into that segment of risks and harms: what are the accountability frameworks, and how do we protect against this? For those of you who are following carefully, I would say that the organizers of the panel have done a beautiful work on gender. I don’t know if you noticed, but it’s boys on one side, girls on the other: the women asking questions to the men, and the men asking questions to the women.

They’re by definition much smarter. That’s pretty clear. And that’s exactly where I was going. And the next questions to the women are going to be harder than before, as they should be. So let’s start. Yes. But to be fair, it continues to get harder and harder as the panel continues. Let’s start with Baroness Joanna. You’ve held UK government roles focused on Internet safety and harms, and you have helped build major child online safety coalitions internationally. From that experience, what is one key lesson from the UK Internet safety agenda that you believe is worthwhile surfacing today? And maybe one area where you would say: we’ve tried this, please don’t do this.

Baroness Joanna Shields

If I could convey one thing, after 15 years of looking at how we regulate technology to prevent harm, it is that the post-harm paradigm we’re operating in is not going to work in the AI future. We have had to adapt very quickly as governments as harms have emerged using AI. For instance, the deepfake crisis that we’ve experienced recently: I know six, seven jurisdictions, countries, that have very quickly implemented laws that are specific and targeted to that particular harm. But what we need to do is step back and think about how we build and design safety from the ground up.

And my personal view is that this has to come through consultation with the companies. I see a very different type of reaction from the AI model developers. They’re much more receptive to the idea of safety by design and building in guardrails that protect children from the outset. And I’m actually an optimist at the moment, because I’m starting to see a lot of companies doing a lot of this work right now. Companies like OpenAI just recently announced that they have an age gate, age assurance technology, to ensure that children under age, whatever the jurisdiction’s threshold is, I think it’s 18, are not able to engage with the model and experience that.

And I think that’s really important, because we’ve been battling this question of age on the Internet for 15 years. And now the technology, whether it’s cryptography or biometrics, all kinds of technologies have emerged to where you can preserve privacy. So there are no excuses anymore for companies not to build in robust age assurance that’s privacy-preserving and that can ensure that the experience you get is designed to be appropriate for the age you are.

Thomas Davin

Thank you so much. So I love the point that, social media, we talk a lot about social media these days, right? Rightfully so. But indeed, it’s been a late awakening worldwide about the potential of that technology, but also about what happens to children in many ways, and we cannot make that same mistake with AI. It’s just so much deeper and broader, and we need to look at this a lot more systematically. Maria, if I can come to you. Your work spans user modeling, personalization, as far as I understand it, and trustworthy AI, and you’ve also spoken publicly about disinformation risks. In your view, where do AI systems create the highest-risk failure modes for children specifically, and what kind of technical evaluation should be required before deployment?

Maria Bielikova

… on TikTok for 10 days in Germany, actually. And then we found out what happened. And maybe I can tell it in a second, in my second entry; it was really shocking for us. Thank you so much.

Thomas Davin

So in essence, really having very clear, impact-focused research continuously, so that it can inform potential inquiry mechanisms and potential redress mechanisms as a way to safeguard against those risks.

Maria Bielikova

And how they are exposed to commercial content. And this is the most critical.

Thomas Davin

Thank you so much.

Maria Bielikova

Even though we have the Digital Services Act in Europe.

Thomas Davin

Thank you. Let’s move to the third segment.

Urvashi Aneja

Thanks. Yeah, and I think that brings us really nicely to this question of what next, what do we do. I think we often agree on what needs to be done at the level of principles: safety, transparency, accountability. I think you’ve added another dimension when you talk about evaluations: that we need to be doing real-world evaluations in real-world deployment contexts of these systems, not just testing them in a lab setting, but evaluating them in a real-world context, and regularly. I think the hard part, at least when we talk about principles like safety, transparency, and accountability, is how we operationalize them across jurisdictions and also across business models, which I think also speaks to the point you were making around it being a feature and not a bug.

So this segment is really about the how, what becomes enforceable, what becomes measurable, and what changes incentives. Tom, if I can start with you again. As AI becomes more embedded in classrooms and in learning platforms, what governance or design choices are essential to ensure that these tools support children’s well -being at scale, particularly around diverse education systems and cultural contexts?

Tom Hall

Thank you. Clearly this is a really exciting moment; the potential of this moment in time is enormous, so I think everyone should be ambitious, but at the same time be measured. Go in ambitious with your design plans for bringing AI into classrooms, and see it as an opportunity to maybe make exponential gains in many different markets where you may have been very challenged before. I think there are tremendous opportunities for many markets in the global south right now, so see the introduction of AI and AI literacy as something of a reset. But don’t jump in blindfolded. This is a once-in-a-lifetime opportunity to establish essential foundational skills for young people, and it’s going to need really careful thought. These governance and design choices have got to be built on no-regret moves, so I would say put data privacy, data sovereignty, inclusion, and respect for the student at the top of any plan. When you teach about, I don’t know, systemic bias in large language models in classrooms, make sure that kids of all types of diversity are represented and can see themselves reflected in the products that they’re experiencing.

Children have a lot to say in this space, so involve them. We’ve published a free AI policy toolkit for classrooms. Have children think about what kinds of things they think need to be considered here; it’s going to be a really meaningful conversation between teacher and student. And talking of teachers, I think give them exciting but also relevant curriculum. We have computer science qualifications in the UK; the entry levels for those are critically low, and very low for girls. We introduced them 10 years ago, we gave very insufficient training to teachers, and the curriculum is frankly very dry. I think we have to really think about real-world curriculum that is going to excite students, and let them see themselves working on real-world problems in the types of learning experiences that we’re putting out there.

I’m speaking on behalf of the LEGO Group, so, you know, children are our role models. I think when you’re designing AI policies for children, this has to be child-centered and child-led. So involve them in the plans as you roll them out, and I hope that will lead to some really exciting changes.

Urvashi Aneja

Thanks, Tom. Chris, earlier this year, OpenAI’s policy engagement has included calls for common-sense youth safety approaches and more parental controls. So what, in your view, should be the baseline governance package for child-facing AI, and what should be globally interoperable versus locally set?

Chris Lehane

Sure. Thank you for the question. Let me just give two points, and then I’ll answer that question specifically. First of all, and I think this is a really smart room, so I’m sure we’re all thinking about it this way, but it’s really important to understand and recognize that this is not social media, and we should not make the classic mistake of fighting the last war instead of the next war. There are certainly lessons that are important to take from it, but understand that this is going to be a technology that is not just on your device, but is going to be around you in all sorts of different ways, physical world and non-physical world.

So understand that component. Secondly, there are interesting lessons from what we’ve seen on the catastrophic-harm side. You’ve seen the emergence of AI safety institutes around the world, where the leading frontier labs, for the most part, work with those safety institutes to basically create safety standards: the UK, US, Europe, Japan, Australia. You’ve seen an early version of that here in India, and I do wonder whether there’s some version of that that you actually do specifically for kids’ safety. The third point really goes to your question, which is, yeah, we have put forth, and we’re really the only AI company that has done this thus far, we do hope others will join us, basically a multi-pronged approach.

The first, and the Baroness mentioned this, is we do age assurance. We try to use signals to identify whether you’re under 18 or not. If we identify you as under 18, or if we are unable to determine your age, we default you to an under-18 model. So even if we’re not sure of your age, we default you to an under-18 model, which has all sorts of restrictions around violence, sexual conversations, and mental-health-type issues. Three, we build in a ton of parental controls. Parents can control whether it has memory or not about your child. Parents can get real-time feedback. Parents can control how long your child is spending on it. You can get warnings and alerts if your child is asking about things that would be in the mental health space.

Four, we prohibit any targeted advertising to kids using the technology; I think that’s a clear lesson from the social media age. Fifth, we have an outside review process that we’ve called for. In the U.S., that would be done by, say, a state attorney general, someone who is part of government, to actually review that you are in fact doing what you say you are doing. And then finally, we prohibit bots that specifically target kids. There may come a time and place, when we actually have really good guardrails around this, when they can serve really helpful, positive, productive purposes. But until we have those guardrails, we think we need to be really, really mindful of that.

So it is a complete package. We are pushing this in California and a number of states. We want to take it around the world. We’re working with some of the leading children’s advocacy organizations, and anyone here who would want to work with us on it, we really welcome that. And we don’t pretend to have all the answers; we’re super humble about this. We do think, from what we’ve seen in our data, this makes a lot of sense. It goes farther than what others have done. But we also know that this is going to be a constant learning process, and this is a beginning, not even the middle, and certainly not the end.

Urvashi Aneja

Sorry, just to ask a follow-up question on the bit around how you make this locally relevant. So you have this kind of package and you’re rolling it out in the U.S. How do you then tailor it to different contexts?

Chris Lehane

You know, it’s a great question. There are some parts of the world, Europe is an example, where there are privacy limitations that actually impact your ability to do age assurance at the level you would like to do it. So we’re in the process, in some of these jurisdictions, of trying to work through some of those types of issues. I think there are other dynamics that potentially come into play, which may be what you’re asking about: cultural context, societal context. And I think those are things that you do have to work through with individual countries, because individual countries are going to have their own norms on those.

And I think we’ll also see different levels of vulnerability or different types of vulnerabilities in those different contexts.

Urvashi Aneja

Fair enough. Baroness, if I can bring you in. How should global norms for children’s safety handle cultural and regulatory diversity without creating, in some sense, loopholes that allow companies to opt for the weakest protection?

Baroness Joanna Shields

So I wanted to take that question in two different directions. First of all, in terms of a global regulatory framework, there are certain standards that are required across every jurisdiction. I mean, every country has an age at which children can participate in the digital world. And unfortunately, it’s a blunt instrument in many cases; it applies across the board at a certain age. We’ve been seeing a lot of social media bans recently, and I think that has come out of exasperation on the part of governments: they’ve just given up trying to regulate this technology, and they’ve decided to use that blunt instrument as a guide. And unfortunately, there are then benefits that children can’t access.

But the reality is that there’s a little bit of movement here. As age assurance technology grows and becomes much more capable, we can custom-design experiences for young people that accommodate their level of maturity and capability, and ensure that we can meet these requirements in a much more sophisticated and better way. It’s about time we solve for age online once and for all, and I believe we’re getting close to that. There’s an organization called the Open Age Alliance, a very important organization that’s looking to harmonize standards across all age assurance technology. So whatever age assurance your platform deems reliable, Open Age will enable you to generate an age key.

And then that age key travels with the child everywhere they go online. So we’ve got an absolutely verifiable way for companies to deliver an age-appropriate experience. And you asked me about something else that I think is really important in this context: culture. If we have a world where we are accepting models from just the global north, I really believe we will lose so much of our cultural diversity, our uniqueness as people, wherever we come from, whatever our background is. We have to be very mindful that we don’t develop a monoculture based on a handful of models that everybody uses around the world, where we lose that richness of who we are, what makes us human.

I think that wasn’t really the aim of the question, but I couldn’t let it go without bringing that to bear, because this is an absolutely critical question we need to solve as a society.

Urvashi Aneja

Thank you. I couldn’t agree more on both those points: that we have to solve for age verification, and then the risk of flattening culture and what that means for children and for how they develop and grow. Maria, last but not least, you’ve helped elevate trustworthy AI as a public agenda in Slovakia and in Europe through initiatives spotlighting responsible practices. So if a regulator asked you for key measures or measurable indicators that an AI system is acting in a child’s best interest, what would those be?

Maria Bielikova

Actually, I already mentioned it somehow: AI at this moment is so complex, I mean the neural networks that we have, that we cannot actually measure something we don’t know. We can observe it, and this is quite important: to do a lot of studies, as we do, and not just take analytics from the companies that provide them, even though they seem the best. Because even though they say that children are not profiled, they are, because we see it, and sometimes it’s out of the companies’ control. We should really make such studies, as I mentioned, because, for example, one of the results of a study I mentioned before is that children see fewer formal advertisements on TikTok.

This is fine, but actually they are exposed five times more to profiling, to the topics with influencers and so on; those are not formal advertisements. So we definitely should do a lot of such studies. And the children should be there, because if we prohibit everything for them until some age, then they will not be able to explore it. It is the same as if we prohibited children from going to the city. But we should know what is going on, and we should travel with them through this environment. And this is probably the most important thing: to do such studies to really understand what is going on on the platforms where they are, because they will be there.

Urvashi Aneja

I think that’s such a powerful analogy, the city one. And while you were speaking, what struck me is that we have some tools already; we don’t have to approach this afresh. We actually have tools around data protection and privacy, and if we enforce them, some of that profiling you’re talking about need not happen. We have tools that allow us to get data from the platforms to actually understand what is happening on these systems. So again, we have things in our regulatory toolbox that we can exercise. And then, of course, in addition to that, there is this point around contextual evaluations that involve children, so that we can understand what these systems are actually doing. Thomas, maybe I can hand it back to you. Or did you want to add something?

Tom Hall

Thomas is my formal name, so I thought you were talking to me.

Urvashi Aneja

Oh right, if you would like to add something, and then you can hand it on to the other Thomas.

Tom Hall

A lady said something to me this morning that I thought was very wise: you’ve got to think about what kind of ancestor you want to be. And I guess we’re at this really interesting moment. We’ve had social media, we’ve had sugar, we’ve had tobacco. Surely now this is our chance to make some really sharp decisions and pay it forward for the next generation. So credit to the lady who said that to me this morning.

Thomas Davin

Thank you so much, Tom. So it’s going to be very hard to close, so maybe I’ll just try to share at least the points that I took from the panel, and hopefully they will resonate. I come away with a sense of, it’s going to sound terribly UN, but measured optimism. One, because the potential is tremendous. We are all aware of that. The potential, at least from a UNICEF lens, to really change outcomes for children in ways we have never been able to do before is huge, and I think that’s something that we can all be proud of. And the risks are equally, tremendously important, and potentially will be there for decades if we don’t craft and design this right.

To my mind, there are maybe three directions that I heard where we are going in the right direction. One is that safety by design has to be a must. That’s about age appropriateness. It’s about data privacy. It’s about child rights at the heart. It’s about content appropriate for the right age. It’s about systematic impact measurement. I was struck, Tom, in your session this morning when you were talking about how, if we have a model that actually gives children the right answer, or an answer, all the time, they might actually lose their sense of curiosity. And I never thought about it like this. What a huge loss that would be for humanity, if we suddenly have children who are just no longer curious because they can just ask whatever question.

Can we design a model that actually gives the wrong answer on purpose, so that the child actually struggles? Because we know that grit is going to be one of the huge skills of tomorrow. So those things are going to be massively important. Redress mechanisms: we don’t talk about these, and how we enforce those redress mechanisms when things go wrong matters too. The second layer in my mind would be inclusion by default, coming back to the Baroness’s point about the risk of a monoculture. We know that some of that is already playing out, and hopefully having a summit in India is one of the turning points where we can see this turning around a little bit, where we really have so many more countries beyond the global north shaping what those solutions are, with representation of regions, of languages, of different dialects, but also of children with disabilities, who, as we know, are quite often left out.

And maybe one thing that we haven’t really talked about is having solutions that work for the unconnected, solutions that work offline. We are at risk of focusing only on urban, connected populations, and that would be terrible for those who are already struggling by the wayside. And last but not least is children at the heart. Children at the heart because that’s who we want to create that world for, the ancestors we want to be for them, but also because Rahul demonstrated that for us: they are the most effective users of these tools and the ones best able to tell us, this works for me, this doesn’t work for me. They should not just be consulted; they should be part of the governance of those mechanisms.

That starts with AI literacy in schools. It also starts with helping parents gain the ability to guide their children to that literacy. Hopefully, if we get all of these right, we have a chance.

Urvashi Aneja

Thank you all for joining us. I just want to give the floor back to the MC.

Moderator

Thank you so much to the panelists, the moderators, and the audience, also on behalf of Under-Secretary-General Amandeep Gill, United Nations Special Envoy for Digital and Emerging Technologies, who regrets missing the session as he is stranded with the Secretary-General’s program. Even the United Nations motorcade cannot make it through Delhi traffic. But could we please welcome Rahul back up to the stage for a very brief reflection on the discussion?

Rahul John Aju

I’ll make sure it’s brief. First of all, guys, can we have a big clap for them? That was not enough. If you don’t realize, these are the main people who are designing the future for us kids. And the fact that I got an opportunity to speak here, thank you again, UN, for that. Thank you, AI Summit, for that. And whatever they said is very true. You know why? Because when these AI tools are being built, the policies that are designed should keep us kids in mind as the first thought, not an afterthought, right? And the fact that that’s happening is good, right? Because from LEGO to OpenAI to all these big places, to ma’am, everyone here, they’re designing the next world.

And I just want to say a big, big, big thank you. I also want to add one last thing. Thank you so much for talking about me in between, but more than that, for listening to us kids, for not just assuming what we need, but for keeping our opinions in mind while building this. So a big thank you from all the children out there. Thank you.

Moderator

Excellencies and distinguished guests, thank you for your participation and engagement. We appreciate the insights shared today and look forward to continued discussion on the responsible advancement of AI. The session is now concluded. Thank you, audience. May I request the session officers to please come to the stage, and may I request the audience to exit through the door behind us. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Baroness Joanna Shields
5 arguments · 149 words per minute · 1123 words · 449 seconds
Argument 1
Post-harm regulatory models from social media are inadequate for AI; need safety by design from the outset
EXPLANATION
The reactive regulatory approach used for social media, where regulations are implemented after harm occurs, is not suitable for AI governance. AI requires proactive safety measures built into the design from the beginning rather than responding to damage after it happens.
EVIDENCE
References to deepfake crisis requiring quick legislative responses in six or seven jurisdictions, and mentions that AI developers are more receptive to safety by design approaches than previous technology companies
MAJOR DISCUSSION POINT
AI Governance and Child Safety Framework
AGREED WITH
Chris Lehane
Argument 2
AI creates simulated intimacy that children cannot distinguish from authentic human connection, requiring specific protections
EXPLANATION
AI systems create one-to-one adaptive interactions that simulate human-like intimacy at scale, which children cannot reliably distinguish from real human connection. This has implications for safety, mental health, identity formation, and long-term well-being.
EVIDENCE
Examples of emotional dependency, manipulation, deepfake abuse, and devastating loss cases; quote ‘When a model says to a child, I care, I understand, that’s not conscience, that’s code’
MAJOR DISCUSSION POINT
AI Governance and Child Safety Framework
Argument 3
Age assurance technology must be implemented with privacy-preserving methods to ensure age-appropriate experiences
EXPLANATION
Robust age verification systems using technologies like cryptography or biometrics can now preserve privacy while ensuring children receive age-appropriate experiences. There are no longer excuses for companies not to implement these systems.
EVIDENCE
References OpenAI’s recent announcement of age gate technology; mentions Open Age Alliance working to harmonize age assurance standards and create portable age keys
MAJOR DISCUSSION POINT
AI Governance and Child Safety Framework
AGREED WITH
Chris Lehane
DISAGREED WITH
Maria Bielikova
Argument 4
Global standards needed for certain protections while allowing cultural adaptation in implementation
EXPLANATION
While certain baseline protections for children should be universal across jurisdictions, implementation can be adapted to local cultural and regulatory contexts. However, there’s a risk of companies choosing the weakest protection standards.
EVIDENCE
Notes that every country has age requirements for digital participation; mentions privacy limitations in Europe affecting age assurance capabilities
MAJOR DISCUSSION POINT
AI Governance and Child Safety Framework
Argument 5
Risk of cultural homogenization if AI models primarily reflect global north perspectives
EXPLANATION
If the world accepts AI models developed primarily in the global north, there’s a risk of losing cultural diversity and uniqueness. This could create a monoculture that diminishes the richness of human diversity.
EVIDENCE
Warns against accepting models from ‘just the global north’ and losing ‘cultural diversity, our uniqueness as people, wherever we come from, whatever our background is’
MAJOR DISCUSSION POINT
Risks and Accountability in AI Systems
Rahul John Aju
4 arguments · 175 words per minute · 1914 words · 656 seconds
Argument 1
Children should learn foundational skills first before using AI tools, similar to learning basic math before using calculators
EXPLANATION
Just as students learn basic mathematics before being allowed to use calculators in exams, children should master fundamental skills like writing essays and other basics before using AI assistance. This ensures they understand the foundations before leveraging artificial intelligence.
EVIDENCE
Personal example of getting calculator access only after learning basic math; advocates for learning ‘natural intelligence first, then artificial intelligence’
MAJOR DISCUSSION POINT
AI Literacy and Educational Integration
AGREED WITH
Tom Hall
DISAGREED WITH
Chris Lehane
Argument 2
Schools should teach ‘how to think’ critically rather than ‘what to think’
EXPLANATION
Current education systems focus on teaching students what to think, but they should instead focus on teaching critical thinking skills, communication, and how to handle failures. These foundational thinking skills are essential in the AI era.
EVIDENCE
Personal anecdote about father encouraging him to ‘question everything’ and be critical; mentions learning to communicate and handle failures as essential skills
MAJOR DISCUSSION POINT
AI Literacy and Educational Integration
Argument 3
Children must be included in governance mechanisms and policy design, not just as subjects but as participants
EXPLANATION
Children should be active participants in designing AI policies and governance frameworks that affect them, rather than just being the subjects of adult decision-making. Their perspectives and experiences are essential for creating effective protections.
EVIDENCE
Created Rescue AI tool to help people understand terms and conditions; taught AI courses to 7 lakh (700,000) people for free, demonstrating children’s capability to engage with and teach AI concepts
MAJOR DISCUSSION POINT
Children’s Agency and Participation
AGREED WITH
Tom Hall, Thomas Davin, Moderator
Argument 4
Children have demonstrated capability to learn and engage with AI when content is delivered appropriately
EXPLANATION
When AI education is delivered in the right way that matches different learning styles, children show strong demand and capability to learn. The key is adapting delivery methods to individual preferences rather than using one-size-fits-all approaches.
EVIDENCE
Personal example of preferring visual learning through videos; created free AI courses that attracted 7 lakh (700,000) learners; mentions tools like Notebook LM and StudyFetch that convert content into different formats
MAJOR DISCUSSION POINT
Children’s Agency and Participation
AGREED WITH
Chris Lehane, Tom Hall
Tom Hall
5 arguments · 163 words per minute · 927 words · 340 seconds
Argument 1
AI literacy means giving children tools to understand what’s ‘under the hood’ rather than treating AI as a magic box
EXPLANATION
Instead of allowing children to see AI as a mysterious magic box that provides answers, AI literacy should empower them to understand the fundamental components and workings of AI systems. This includes understanding how computers process data, predictability, bias, and accountability.
EVIDENCE
Uses metaphor of ‘handing children a screwdriver’ to take apart and understand complex systems; mentions teaching concepts like how computers see the world as data, sensing, bias, and accountability
MAJOR DISCUSSION POINT
AI Literacy and Educational Integration
AGREED WITH
Rahul John Aju
Argument 2
Teachers need proper training and support as only 41% feel ready to teach AI literacy despite 80% seeing it as foundational
EXPLANATION
While most teachers recognize AI literacy as a fundamental skill that needs to be taught in schools, less than half feel prepared to actually deliver this education. This gap between recognition and readiness needs to be addressed through proper teacher training and curriculum development.
EVIDENCE
Survey data showing 80% of US primary and middle school teachers believe AI literacy should be foundational, but only 41% feel ready to teach it
MAJOR DISCUSSION POINT
AI Literacy and Educational Integration
Argument 3
Child-centered and child-led design approaches are essential for AI policies affecting children
EXPLANATION
AI policies and educational approaches for children should be designed with children at the center and should involve children in the design process. Children should be seen as partners in creating solutions rather than just recipients of adult-designed policies.
EVIDENCE
References LEGO Group’s philosophy that ‘children are our role models’; mentions publishing a free AI policy toolkit for classrooms that encourages children to think about governance considerations
MAJOR DISCUSSION POINT
AI Literacy and Educational Integration
AGREED WITH
Rahul John Aju, Thomas Davin, Moderator
Argument 4
Data privacy, sovereignty, and inclusion must be foundational ‘no regret moves’ in AI implementation
EXPLANATION
When implementing AI in educational settings, certain principles should be non-negotiable foundational elements. These include protecting student data privacy, ensuring data sovereignty, and making sure all types of children can see themselves represented in AI products and experiences.
EVIDENCE
Mentions ensuring ‘all kids of all types of diversity and inclusions are represented and can see themselves coming back in the products’; emphasizes putting ‘data privacy data sovereignty and inclusion and respect for the student at the top of any plan’
MAJOR DISCUSSION POINT
Implementation and Practical Measures
Argument 5
Need for exciting, relevant curriculum that connects to real-world problems rather than dry technical content
EXPLANATION
AI education curriculum should be engaging and connected to real-world applications rather than being dry and technical. Current computer science education suffers from low enrollment, particularly among girls, partly due to insufficient training and unengaging curriculum.
EVIDENCE
Points to UK computer science qualifications having ‘critically low’ entry levels, especially for girls; attributes this to ‘very insufficient training’ and ‘frankly very dry’ curriculum introduced 10 years ago
MAJOR DISCUSSION POINT
Implementation and Practical Measures
AGREED WITH
Chris Lehane, Rahul John Aju
Chris Lehane
4 arguments · 191 words per minute · 1316 words · 412 seconds
Argument 1
AI can enable personalized learning experiences that adapt to different learning styles and paces
EXPLANATION
AI technology has the capability to provide individualized tutoring for every child in the world, allowing them to learn at their own pace and in ways that match their individual learning styles. This could be tremendously liberating for education by addressing the reality that children learn in very different ways.
EVIDENCE
Mentions working with the largest teachers union in the US (400,000 teachers) to train them in developing AI for individualized teaching
MAJOR DISCUSSION POINT
AI Literacy and Educational Integration
AGREED WITH
Tom Hall, Rahul John Aju
DISAGREED WITH
Rahul John Aju
Argument 2
Current education systems designed for industrial age need updating to teach agency in AI era
EXPLANATION
The US K-12 education system was designed for the industrial age to prepare students for factory work, but the AI era requires teaching students to take agency and control their own learning and labor. The technology rewards people who take initiative and can scale individual capabilities.
EVIDENCE
Describes how US education system was designed to move people from rural to urban environments for factory work, including the structure of bells, different classrooms, and daily schedules
MAJOR DISCUSSION POINT
Children’s Agency and Participation
Argument 3
AI has potential to be a leveling technology that scales individual ability to think, learn, and create
EXPLANATION
AI can democratize access to enhanced cognitive capabilities, allowing individuals to amplify their ability to think, learn, create, build, and produce. This could fundamentally change the relationship between capital and labor by giving individuals more control over their own work.
EVIDENCE
References people like Rahul who demonstrate taking agency with technology; suggests this could impact ‘the social contract relationship between capital and labor’
MAJOR DISCUSSION POINT
Children’s Agency and Participation
Argument 4
Multi-pronged approach including age verification, parental controls, and prohibition of targeted advertising for children
EXPLANATION
OpenAI has implemented a comprehensive child safety package that includes age assurance technology, defaulting to under-18 models when age is uncertain, extensive parental controls, real-time feedback systems, and prohibition of targeted advertising to children. This represents the most comprehensive approach in the industry.
EVIDENCE
Details specific measures: age assurance using signals, defaulting to under-18 models, parental memory controls, real-time feedback, mental health warnings, no targeted advertising, outside review process, and prohibition of specific kids bots
MAJOR DISCUSSION POINT
AI Governance and Child Safety Framework
AGREED WITH
Baroness Joanna Shields
DISAGREED WITH
Maria Bielikova
Maria Bielikova
4 arguments · 129 words per minute · 310 words · 143 seconds
Argument 1
Children are exposed to hidden profiling and commercial content through AI systems, even when formal advertising is restricted
EXPLANATION
Research shows that while children may see fewer formal advertisements on platforms like TikTok, they are actually exposed to five times more profiling through influencer content and other non-formal advertising methods. This hidden commercial targeting is often outside the control of the platforms themselves.
EVIDENCE
Study results showing children see less formal advertising on TikTok but are exposed 5x more to profiling through influencers; mentions 10-day study conducted in Germany
MAJOR DISCUSSION POINT
Risks and Accountability in AI Systems
Argument 2
Current AI systems are too complex to fully measure or understand, requiring observational studies
EXPLANATION
Neural networks and AI systems are so complex that we cannot measure what we don’t fully understand. Instead of relying solely on analytics provided by companies, independent observational studies are crucial to understand what is actually happening on platforms where children are present.
EVIDENCE
Emphasizes need for studies ‘not just taking analytics from companies’ and mentions conducting independent research to observe actual platform behavior
MAJOR DISCUSSION POINT
Risks and Accountability in AI Systems
AGREED WITH
Urvashi Aneja, Thomas Davin
DISAGREED WITH
Chris Lehane
Argument 3
Children should not be prohibited from exploring AI environments but need guidance and protection
EXPLANATION
Rather than completely prohibiting children from AI environments until a certain age, they should be allowed to explore these digital spaces with proper guidance and protection. The approach should be similar to allowing children to explore a city – not banning them but ensuring they have proper supervision and safety measures.
EVIDENCE
Uses analogy of not prohibiting children from going to the city but ensuring we ‘travel with them through this environment’ and ‘know what is going on’
MAJOR DISCUSSION POINT
Risks and Accountability in AI Systems
DISAGREED WITH
Baroness Joanna Shields
Argument 4
Real-world evaluation studies are essential to understand actual platform impacts on children
EXPLANATION
To truly understand how AI systems affect children, we need continuous real-world evaluation studies that observe actual behavior and impacts on platforms where children are active. These studies should be independent and not rely solely on company-provided analytics.
EVIDENCE
Mentions conducting studies on platforms like TikTok and finding unexpected results about profiling and commercial content exposure
MAJOR DISCUSSION POINT
AI Governance and Child Safety Framework
Thomas Davin
4 arguments · 175 words per minute · 1227 words · 419 seconds
Argument 1
Over-dependency on AI could reduce children’s curiosity and critical thinking abilities
EXPLANATION
If AI systems always provide correct answers to children’s questions, there’s a risk that children might lose their natural curiosity and ability to struggle with problems. This could result in a significant loss for humanity if children no longer develop grit and persistence through challenging experiences.
EVIDENCE
Poses hypothetical question about designing AI models that give wrong answers on purpose so children still struggle and develop grit, recognizing this as a crucial skill for the future
MAJOR DISCUSSION POINT
Risks and Accountability in AI Systems
Argument 2
Children’s perspectives are essential for understanding what works and what doesn’t in AI systems
EXPLANATION
Children should be at the heart of AI governance not just as beneficiaries but as active participants who can provide feedback on what works and what doesn’t. They are often the most effective users of technology and can guide adults in creating better systems.
EVIDENCE
References Rahul’s demonstration of children’s capabilities and emphasizes that children ‘should be part of the governance of those mechanisms’
MAJOR DISCUSSION POINT
Children’s Agency and Participation
AGREED WITH
Rahul John Aju, Tom Hall, Moderator
Argument 3
Solutions must work for unconnected populations and offline contexts, not just urban centers
EXPLANATION
AI solutions for children must be designed to work for those who are not connected to the internet and in offline contexts. There’s a risk of focusing only on urban, connected populations while leaving behind those who are already struggling and marginalized.
EVIDENCE
Warns against ‘focusing on urban centered people’ and emphasizes need for solutions that ‘work for the unconnected, having solutions that work offline’
MAJOR DISCUSSION POINT
Implementation and Practical Measures
Argument 4
Continuous impact measurement and redress mechanisms are necessary when AI systems fail
EXPLANATION
AI systems affecting children need systematic impact measurement and clear redress mechanisms for when things go wrong. This includes having enforceable ways to address harms and ensure accountability when AI systems fail to protect children.
EVIDENCE
Mentions ‘systematic impact measurement’ and ‘redress mechanism, we don’t talk about this and how we enforce those redress mechanism when things go wrong’
MAJOR DISCUSSION POINT
Implementation and Practical Measures
AGREED WITH
Maria Bielikova, Urvashi Aneja
Moderator
2 arguments · 104 words per minute · 338 words · 193 seconds
Argument 1
Discussions about children and technology should include children as participants rather than just talking about them
EXPLANATION
Too often, conversations about children and technology are conducted without meaningful participation from children themselves. This session is intentionally designed to include children’s voices and perspectives rather than just having adults discuss what children need.
EVIDENCE
Introduction of Rahul John Aju as a featured young AI innovator who has built real-world AI tools and founded his own startup
MAJOR DISCUSSION POINT
Children’s Agency and Participation
AGREED WITH
Rahul John Aju, Tom Hall, Thomas Davin
Argument 2
The question is not whether children will engage with AI, but whether adults and institutions are prepared to guide that engagement responsibly
EXPLANATION
Children’s engagement with AI is inevitable, so the focus should shift from preventing that engagement to ensuring that adults, institutions, and systems are adequately prepared to provide responsible guidance and support.
EVIDENCE
Reference to Rahul’s demonstration of children’s capabilities and the need for responsible guidance systems
MAJOR DISCUSSION POINT
AI Governance and Child Safety Framework
Urvashi Aneja
4 arguments · 169 words per minute · 1080 words · 382 seconds
Argument 1
AI’s impact on children depends on whether it expands or narrows their agency through design choices and incentives
EXPLANATION
The key test for AI systems affecting children is whether they enhance children’s agency, learning, and capabilities or whether they subtly constrain these through the way they are designed and the incentives built into them. This framing focuses on the governance choices that shape how technologies are implemented in society.
EVIDENCE
References her work studying governance choices that shape how technologies land in society
MAJOR DISCUSSION POINT
Children’s Agency and Participation
Argument 2
Agency is not only about individual capacity but is shaped by broader socioeconomic and institutional contexts
EXPLANATION
While individual capacity matters for agency, the broader socioeconomic and institutional environment significantly influences whether people can exercise agency. This is particularly visible in global south contexts where structural factors play a major role.
EVIDENCE
References the perspective from India and the global south where contextual factors clearly influence individual agency
MAJOR DISCUSSION POINT
Children’s Agency and Participation
Argument 3
Existing regulatory tools like data protection and privacy laws should be enforced to address current AI harms
EXPLANATION
Rather than starting from scratch, we already have regulatory tools around data protection and privacy that, if properly enforced, could address many current issues like profiling of children. We also have mechanisms to obtain data from platforms to understand what’s happening on these systems.
EVIDENCE
References existing data protection frameworks and platform data access mechanisms that are underutilized
MAJOR DISCUSSION POINT
Implementation and Practical Measures
Argument 4
Contextual evaluations involving children are essential to understand real-world AI system impacts
EXPLANATION
To truly understand how AI systems affect children, we need evaluations that take place in real-world contexts and actively involve children in the evaluation process. This goes beyond laboratory testing to understand actual deployment impacts.
EVIDENCE
References Maria’s research on real-world platform studies that revealed unexpected findings about profiling and commercial content
MAJOR DISCUSSION POINT
AI Governance and Child Safety Framework
AGREED WITH
Maria Bielikova, Thomas Davin
Agreements
Agreement Points
Children should be active participants in AI governance and policy design, not just subjects of adult decision-making
Speakers: Rahul John Aju, Tom Hall, Thomas Davin, Moderator
Children must be included in governance mechanisms and policy design, not just as subjects but as participants
Child-centered and child-led design approaches are essential for AI policies affecting children
Children’s perspectives are essential for understanding what works and what doesn’t in AI systems
Discussions about children and technology should include children as participants rather than just talking about them
All speakers agreed that children should have meaningful participation in designing AI systems and policies that affect them, recognizing children as capable partners rather than passive recipients of adult decisions
AI education should focus on foundational skills and critical thinking before introducing AI tools
Speakers: Rahul John Aju, Tom Hall
Children should learn foundational skills first before using AI tools, similar to learning basic math before using calculators
AI literacy means giving children tools to understand what’s ‘under the hood’ rather than treating AI as a magic box
Both speakers emphasized that children need to understand fundamentals and develop critical thinking skills before using AI assistance, comparing it to learning basic math before using calculators
Safety by design is essential and cannot rely on post-harm regulatory approaches
Speakers: Baroness Joanna Shields, Chris Lehane
Post-harm regulatory models from social media are inadequate for AI; need safety by design from the outset
Multi-pronged approach including age verification, parental controls, and prohibition of targeted advertising for children
Both speakers agreed that proactive safety measures must be built into AI systems from the beginning, rather than reacting to harm after it occurs, as was done with social media
Age assurance and age-appropriate experiences are crucial for child safety in AI systems
Speakers: Baroness Joanna Shields, Chris Lehane
Age assurance technology must be implemented with privacy-preserving methods to ensure age-appropriate experiences
Multi-pronged approach including age verification, parental controls, and prohibition of targeted advertising for children
Both speakers emphasized the importance of robust age verification systems that can deliver age-appropriate AI experiences while preserving privacy
Real-world evaluation and continuous monitoring of AI systems affecting children is necessary
Speakers: Maria Bielikova, Urvashi Aneja, Thomas Davin
Current AI systems are too complex to fully measure or understand, requiring observational studies
Contextual evaluations involving children are essential to understand real-world AI system impacts
Continuous impact measurement and redress mechanisms are necessary when AI systems fail
All three speakers agreed that independent, real-world studies and continuous monitoring are essential to understand how AI systems actually affect children in practice
AI has tremendous potential to personalize and improve learning experiences for children
Speakers: Chris Lehane, Tom Hall, Rahul John Aju
AI can enable personalized learning experiences that adapt to different learning styles and paces
Need for exciting, relevant curriculum that connects to real-world problems rather than dry technical content
Children have demonstrated capability to learn and engage with AI when content is delivered appropriately
All speakers recognized AI’s potential to revolutionize education by providing personalized, engaging learning experiences that adapt to individual children’s needs and learning styles
Similar Viewpoints
Both speakers expressed concern about AI systems potentially creating or reinforcing inequalities, whether through cultural homogenization or structural barriers to agency
Speakers: Baroness Joanna Shields, Urvashi Aneja
Risk of cultural homogenization if AI models primarily reflect global north perspectives
Agency is not only about individual capacity but is shaped by broader socioeconomic and institutional contexts
Both speakers advocated for balanced approaches that allow children to engage with AI while maintaining their natural curiosity and critical thinking skills
Speakers: Maria Bielikova, Thomas Davin
Children should not be prohibited from exploring AI environments but need guidance and protection
Over-dependency on AI could reduce children’s curiosity and critical thinking abilities
Both speakers emphasized the importance of inclusive design that considers diverse populations and contexts, ensuring AI benefits reach all children regardless of their circumstances
Speakers: Tom Hall, Thomas Davin
Data privacy, sovereignty, and inclusion must be foundational ‘no regret moves’ in AI implementation
Solutions must work for unconnected populations and offline contexts, not just urban centers
Unexpected Consensus
Industry willingness to implement proactive child safety measures
Speakers: Baroness Joanna Shields, Chris Lehane
Post-harm regulatory models from social media are inadequate for AI; need safety by design from the outset
Multi-pronged approach including age verification, parental controls, and prohibition of targeted advertising for children
There was unexpected consensus between a government official and an industry representative about the need for proactive safety measures, with the industry representative actually advocating for stricter standards than typically expected from companies
Children’s capability to engage meaningfully with AI governance
Speakers: Rahul John Aju, Tom Hall, Thomas Davin, Moderator
Children must be included in governance mechanisms and policy design, not just as subjects but as participants
Child-centered and child-led design approaches are essential for AI policies affecting children
Children’s perspectives are essential for understanding what works and what doesn’t in AI systems
Discussions about children and technology should include children as participants rather than just talking about them
There was remarkable consensus across all speakers, including industry, government, and academic representatives, about children’s capacity to meaningfully participate in AI governance – a view that challenges traditional paternalistic approaches to child protection
Overall Assessment

The speakers demonstrated strong consensus on key principles: children should be active participants in AI governance, safety must be built into systems from the start, personalized learning has great potential, and real-world evaluation is essential. There was also agreement on the need for inclusive design and balanced approaches that protect children while preserving their agency and curiosity.

High level of consensus across diverse stakeholders (government, industry, academia, and youth representatives) suggests a mature understanding of the challenges and opportunities. This alignment creates a strong foundation for collaborative action on AI governance for children, though implementation details may still require negotiation across different jurisdictions and contexts.

Differences
Different Viewpoints
Approach to children’s access to AI systems – prohibition vs guided exploration
Speakers: Baroness Joanna Shields, Maria Bielikova
Age assurance technology must be implemented with privacy-preserving methods to ensure age-appropriate experiences. Children should not be prohibited from exploring AI environments but need guidance and protection.
Baroness Shields advocates for robust age verification and age-appropriate restrictions, while Maria Bielikova argues against complete prohibition, favoring guided exploration with supervision similar to allowing children to explore a city
Timing of AI introduction in education – foundations first vs early integration
Speakers: Rahul John Aju, Chris Lehane
Children should learn foundational skills first before using AI tools, similar to learning basic math before using calculators. AI can enable personalized learning experiences that adapt to different learning styles and paces.
Rahul advocates for learning natural intelligence and foundational skills before using AI, while Chris emphasizes AI’s immediate potential for individualized tutoring and learning enhancement
Source of evaluation data – company analytics vs independent research
Speakers: Chris Lehane, Maria Bielikova
A multi-pronged approach is needed, including age verification, parental controls, and prohibition of targeted advertising for children. Current AI systems are too complex to fully measure or understand, requiring observational studies.
Chris presents OpenAI’s comprehensive safety package based on their data and approach, while Maria emphasizes the need for independent observational studies rather than relying on company-provided analytics
Unexpected Differences
Cultural homogenization vs global standards
Speaker: Baroness Joanna Shields
Global standards are needed for certain protections while allowing cultural adaptation in implementation. Risk of cultural homogenization if AI models primarily reflect global north perspectives.
Unexpectedly, the same speaker (Baroness Shields) presents seemingly contradictory positions – advocating for global standards while simultaneously warning against cultural homogenization from global north dominance
AI as enhancement vs potential harm to natural development
Speakers: Chris Lehane, Thomas Davin
AI has potential to be a leveling technology that scales individual ability to think, learn, and create. Over-dependency on AI could reduce children’s curiosity and critical thinking abilities.
Unexpected tension between viewing AI as an empowering, democratizing force versus concern that it might diminish essential human capabilities like curiosity and grit
Overall Assessment

The discussion revealed moderate disagreements primarily around implementation approaches rather than fundamental goals. Key tensions emerged between protective vs exploratory approaches to children’s AI access, the timing of AI integration in education, and reliance on industry vs independent evaluation.

Moderate disagreement with constructive tension. Most speakers shared common goals of child safety and empowerment but differed on methods. The disagreements reflect healthy debate about balancing protection with agency, and suggest the field is still developing best practices rather than having fundamental philosophical divides.

Partial Agreements
Both agree on the need for age verification and child protection measures, but differ on implementation approaches – Baroness Shields focuses on industry-wide standards through the Open Age Alliance, while Chris Lehane presents OpenAI’s specific proprietary solution
Speakers: Baroness Joanna Shields, Chris Lehane
Age assurance technology must be implemented with privacy-preserving methods to ensure age-appropriate experiences. A multi-pronged approach is needed, including age verification, parental controls, and prohibition of targeted advertising for children.
Both agree children should be central to AI policy design, but Tom emphasizes child-centered design in educational products while Rahul focuses on children as active participants in governance and policy-making processes
Speakers: Tom Hall, Rahul John Aju
Child-centered and child-led design approaches are essential for AI policies affecting children. Children must be included in governance mechanisms and policy design, not just as subjects but as participants.
Both agree on the need for systematic evaluation and measurement, but Thomas emphasizes redress mechanisms for when systems fail while Maria focuses on ongoing observational studies to understand current impacts
Speakers: Thomas Davin, Maria Bielikova
Continuous impact measurement and redress mechanisms are necessary when AI systems fail. Real-world evaluation studies are essential to understand actual platform impacts on children.
Takeaways
Key takeaways
Post-harm regulatory models from social media are inadequate for AI – safety must be designed in from the outset rather than added reactively
AI creates simulated intimacy that children cannot distinguish from authentic human connection, requiring specific age-appropriate protections
AI literacy should teach children to understand what’s ‘under the hood’ rather than treating AI as a magic box, with foundational skills learned before AI tool usage
Children must be active participants in AI governance and policy design, not just subjects of protection
AI has tremendous potential to individualize learning and act as a leveling technology that scales thinking and creativity abilities
Real-world evaluation studies are essential to understand actual AI system impacts on children, as current systems are too complex to fully measure in lab settings
Risk of cultural homogenization exists if AI models primarily reflect global north perspectives
Teachers need significant support, as only 41% feel ready to teach AI literacy despite 80% recognizing it as foundational
Age assurance technology with privacy-preserving methods is now technically feasible and should be implemented universally
Resolutions and action items
OpenAI committed to continuing their multi-pronged child safety approach including age verification, parental controls, and prohibition of targeted advertising
Call for establishment of AI safety institutes specifically focused on children’s safety, similar to existing catastrophic harm safety institutes
Recommendation for adoption of Open Age Alliance standards to create portable age keys that travel with children across platforms
Need for continued real-world evaluation studies of AI platforms to understand actual impacts on children
Implementation of child-centered and child-led design approaches in AI policy development
Development of exciting, relevant AI literacy curriculum that connects to real-world problems rather than dry technical content
Unresolved issues
How to operationalize AI literacy pedagogy effectively in diverse educational contexts
Balancing global safety standards with local cultural adaptation without creating regulatory loopholes
Addressing privacy limitations in some jurisdictions (like Europe) that impact age assurance capabilities
Developing solutions that work for unconnected populations and offline contexts
Preventing over-dependency on AI that could reduce children’s curiosity and critical thinking abilities
Managing the risk of AI systems giving correct answers all the time, potentially reducing children’s sense of struggle and grit development
Ensuring representation of children with disabilities in AI design and governance
Addressing hidden profiling and commercial content exposure through AI systems
Suggested compromises
Defaulting users to under-18 models when age cannot be determined, erring on the side of protection
Allowing children to explore AI environments with guidance and protection rather than complete prohibition
Balancing individual agency development with necessary safety guardrails
Creating globally interoperable baseline protections while allowing local customization for cultural contexts
Implementing ‘no regret moves’ like data privacy and inclusion as foundational elements while allowing flexibility in other areas
Focusing on safety-by-design collaboration between companies and governments rather than purely regulatory approaches
Thought Provoking Comments
When a model says to a child, I care, I understand, that’s not conscience, that’s code. But for a child, it can feel very real. And children are not miniature adults. They cannot reliably distinguish between authentic human connection and artificial intimacy, especially when systems are so persuasive, emotionally responsive, and always available.
This comment powerfully articulates the fundamental difference between AI and social media platforms – the intimate, one-to-one nature of AI interaction that can simulate human connection. It introduces the critical concept of ‘artificial intimacy’ and highlights children’s developmental vulnerability to this deception.
This framed the entire discussion around the unique risks AI poses compared to previous technologies. It established the stakes and influenced subsequent speakers to address the personalized, intimate nature of AI interactions rather than treating AI as just another digital platform.
Speaker: Baroness Joanna Shields
Schools teach us what to think but I believe schools should teach us how to think. How to think critically and how to face failures, how to communicate… We should learn how to write essays. We should learn how to sing, maybe. Then you should use AI. You should use the natural intelligence first. Then start using artificial intelligence.
This insight from a young person directly challenges the current educational paradigm and offers a concrete framework for AI integration. The analogy to calculators – learning math basics before using calculators – provides a practical model for AI literacy that resonates across contexts.
This comment shifted the discussion from abstract concerns about AI risks to concrete pedagogical approaches. It influenced later speakers to focus on foundational skills and agency-building, with Tom Hall specifically referencing the need for ‘real-world curriculum’ and Chris Lehane emphasizing the importance of teaching agency.
Speaker: Rahul John Aju
This technology is an incredibly leveling technology, it scales the ability of anyone to think, to learn, to create, to build, to produce. And the question is, do you actually encourage people to be able to use it that way? Because if so, the way we think about the social contract relationship between capital and labor and how that is calibrated, this technology can have a huge impact on actually giving individuals the ability to control their own labor as owners of it.
This comment elevates the discussion beyond immediate safety concerns to fundamental questions about economic structures and power distribution. It reframes AI not just as a learning tool but as potentially transformative for social and economic relationships.
This broadened the conversation’s scope significantly, prompting Urvashi Aneja to note that ‘agency is not only a factor of individual capacity, but has so much to do with the broader socioeconomic institutional context.’ It shifted focus from technical solutions to systemic considerations.
Speaker: Chris Lehane
We can observe it and this is quite important to do a lot of studies as we do and not just taking analytics from companies that provide it even though they seem the best because even though they tell that children are not profiled but they are because we see it… children see less formal advertisement on TikTok. This is fine but actually they are exposed five times more to profiling to the topics with influencers and so on.
This comment introduces crucial empirical evidence that challenges company claims about child protection. It reveals the gap between stated policies and actual outcomes, demonstrating how children can be profiled and influenced in ways that circumvent traditional advertising restrictions.
This evidence-based intervention shifted the discussion toward the need for independent evaluation and real-world testing rather than relying on company assurances. It influenced the final recommendations around systematic impact measurement and independent oversight.
Speaker: Maria Bielikova
It is the same as we will prohibit children to go to the city. But we should know what is going on and we should travel with them through this environment.
This powerful analogy reframes the entire approach to child protection in AI environments. Instead of prohibition-based approaches, it suggests guided exploration and accompaniment, which is both more realistic and potentially more effective for building digital literacy.
This analogy influenced the panel’s conclusion toward ‘children at the heart’ approaches and the importance of AI literacy rather than blanket restrictions. It provided a memorable framework that several speakers referenced in their closing remarks.
Speaker: Maria Bielikova
If we have a model that actually gives the right answer or an answer to children all the time, they might actually lose their sense of curiosity… Can we design a model that actually gives the wrong answer on purpose so that the child actually struggles because we know that grit is going to be one of the huge skills of tomorrow.
This paradoxical insight challenges the assumption that AI should always be helpful and accurate. It introduces the counterintuitive idea that struggle and uncertainty might be essential for child development, even if AI could eliminate them.
This comment crystallized concerns about over-dependence on AI and sparked reflection on what human capacities might be lost if AI becomes too seamless. It influenced the final emphasis on maintaining human agency and the importance of foundational skills before AI integration.
Speaker: Thomas Davin
Overall Assessment

These key comments transformed what could have been a technical discussion about AI safety into a profound examination of human development, social structures, and the future of learning. Baroness Shields’ opening established the unique intimacy risks of AI, while Rahul’s youth perspective provided practical wisdom about balancing natural and artificial intelligence. Chris Lehane’s economic framing elevated the stakes to societal transformation, while Maria Bielikova’s empirical evidence and city analogy grounded the discussion in research reality and practical approaches. Thomas Davin’s paradoxical insight about the value of struggle synthesized these themes into a nuanced understanding that effective AI governance for children requires not just safety measures, but careful consideration of what makes us human. Together, these comments created a rich, multi-layered conversation that moved beyond simple risk mitigation to explore fundamental questions about childhood, learning, agency, and human flourishing in an AI-enabled world.

Follow-up Questions
How do you actually embed AI literacy into educational curricula and what specific pedagogical approaches work best?
This was identified as a critical gap where policy makers and educators struggle with the practical implementation of AI literacy programs, moving beyond principles to actual teaching methods
Speaker: Urvashi Aneja
How can we design AI models that intentionally give wrong answers to preserve children’s curiosity and critical thinking skills?
This explores whether AI systems should be designed to encourage struggle and grit in children rather than always providing correct answers, which could diminish curiosity
Speaker: Thomas Davin
How do we create AI solutions that work for unconnected populations and function offline?
This addresses the risk of AI solutions being urban-centered and excluding those who are already marginalized or lack consistent internet connectivity
Speaker: Thomas Davin
How should cultural context and societal norms be incorporated into AI safety measures across different jurisdictions?
This explores the challenge of balancing global safety standards with local cultural values and regulatory frameworks
Speaker: Chris Lehane and Urvashi Aneja
What are the specific technical evaluation methods needed for real-world deployment of AI systems with children?
This addresses the need for contextual evaluations beyond laboratory testing to understand how AI systems actually behave when deployed with children
Speaker: Maria Bielikova
How can we prevent the development of a monoculture in AI that erases cultural diversity and uniqueness?
This explores the risk of global AI models flattening cultural differences and the need to preserve diverse perspectives and values in AI development
Speaker: Baroness Joanna Shields
What governance mechanisms are needed to ensure children are included in AI system governance rather than just being subjects of it?
This addresses how to move beyond consulting children to actually including them in the governance and oversight of AI systems that affect them
Speaker: Thomas Davin
How can privacy limitations in different jurisdictions be reconciled with effective age assurance technologies?
This explores the technical and legal challenges of implementing robust age verification while respecting privacy regulations that vary across regions
Speaker: Chris Lehane

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.