Welfare for All Ensuring Equitable AI in the Worlds Democracies
20 Feb 2026 18:00h - 19:00h
Welfare for All Ensuring Equitable AI in the Worlds Democracies
Session at a glance
Summary
This panel discussion focused on democratizing AI’s impact globally and preventing the concentration of AI’s economic value primarily in Western economies and China, where estimates suggest 70% of AI value could reside. The conversation emphasized that avoiding this outcome requires intentional design, international collaboration, innovation, workforce development, and building trust and security in AI systems.
The panelists discussed the importance of developing international AI safety standards while recognizing the need to customize these standards for different cultures, languages, and local constraints. They highlighted the tension between creating universal standards that enable cross-border technology flow and adapting them to local needs, with examples like Google’s IndIC GenBench supporting 29 Indian languages. The discussion emphasized moving from traditional technology transfer approaches to co-creation models where developers and governments collaborate as partners.
A significant portion of the conversation addressed the persistent AI skills gap and various approaches to bridge it. Microsoft announced commitments to upskill 20 million Indians by 2030, while L&T Technology Services shared their strategy of reaching out to colleges, upskilling current employees during billable hours, and encouraging personal technology development time. The panelists agreed that traditional workforce displacement approaches don’t work in developing economies, requiring more nuanced upskilling strategies.
Security and trust emerged as critical concerns, with discussions about AI-specific cyber threats like prompt injection attacks and the need for multilingual security capabilities. The conversation concluded with reflections on India’s evolution from being viewed as a “back office” to becoming a “front office” for global AI development, emphasizing grassroots impact and the integration of governance with innovation rather than treating them as competing priorities.
Keypoints
Major Discussion Points:
– International AI Standards and Collaboration: The need for global cooperation in developing AI safety standards while allowing for local customization based on cultural, linguistic, and economic differences. Discussion emphasized that standards should be enablers rather than barriers, with examples like Google’s Indiq GenBench for Indian languages.
– Skills Gap and Workforce Development: Addressing the persistent AI skills shortage through public-private partnerships, with specific focus on upskilling existing workers rather than replacement. Companies shared strategies including curriculum partnerships with colleges, continuous employee training during billable hours, and incentivizing technology development through patents and recognition.
– Democratizing AI Access and Preventing Digital Divide: Concerns about AI’s economic value concentrating in Western economies and China, with discussion of intentional efforts needed to ensure broader global participation. Microsoft’s commitment to training 20 million Indians by 2030 and infrastructure investments were highlighted as examples.
– AI Security and Trust Building: Growing cybersecurity threats specific to AI, including prompt injection attacks and multilingual vulnerabilities. Discussion covered the need for “self-defending systems” and AI-versus-AI security approaches, while addressing public trust deficits in AI applications.
– India’s Evolving Role in Global AI: Recognition of India’s transformation from a “back office” to a “front office” for AI development, with emphasis on grassroots impact and local innovation rather than just cost-based services.
Overall Purpose:
The discussion aimed to explore strategies for democratizing AI’s benefits globally, with particular focus on preventing the concentration of AI’s economic value in developed nations and ensuring developing countries, especially India, can participate meaningfully in the AI revolution through international collaboration, skills development, and responsible deployment.
Overall Tone:
The conversation maintained an optimistic and collaborative tone throughout, with participants sharing practical solutions and success stories. While acknowledging significant challenges like the digital divide, skills gaps, and security concerns, speakers consistently emphasized opportunities for partnership and positive outcomes. The tone became particularly enthusiastic when discussing India’s potential and achievements, ending on a note of genuine optimism about AI’s democratization potential despite the challenges ahead.
Speakers
Speakers from the provided list:
– Brad Staples – Panel moderator/host
– Amit Chadha – Managing Director and CEO of L&T Technology Services
– Amanda Craig Deckard – Senior Director, Office of Responsible AI at Microsoft
– Sachin Kakkar – India Site Development, Privacy, Safety and Security at Google
– Lee Tiedrich – Inaugurable AI Multidisciplinary Initiative Fellow, University of Maryland, Senior Advisor on the International AI Safety Report
– Julian Waits – Chief Experience Officer with Rapid7
– Audience – Various audience members asking questions
Additional speakers:
– None identified beyond the provided speakers names list
Full session report
This panel discussion at an AI summit in India addressed the critical challenge of democratising artificial intelligence’s global impact and preventing the concentration of AI’s economic value in developed nations. The conversation brought together diverse stakeholders including technology executives, government advisors, and academic researchers to explore practical strategies for ensuring AI benefits reach developing countries and grassroots communities.
The Challenge of AI Economic Concentration
The discussion opened with moderator Brad Staples highlighting concerning trends in AI development. Some estimates suggest that 70% of AI’s economic value risks being concentrated in Western economies and China if present trends continue unchecked. However, Staples emphasised that this concentration is not inevitable but represents a failure of intentional design, international collaboration, and societies coming together. The panel stressed that democratising AI’s impact requires deliberate efforts across multiple dimensions including international cooperation, innovation and research, workforce development, and establishing trust and security frameworks.
International Standards and Local Adaptation
A significant portion of the discussion focused on developing international AI safety standards whilst recognising the critical need for local customisation. Lee Tiedrich, an Inaugurable AI Multidisciplinary Initiative Fellow who worked with approximately 100 experts on the second International AI Safety Report, highlighted both progress and persistent gaps in AI evaluation and evidence development. Whilst organisations like ISO have released initial standards such as 42,001, the pace of development needs acceleration.
Sachin Kakkar from Google illustrated the localisation challenge through the company’s IndIC GenBench initiative, which supports 29 Indian languages, 12 scripts, and 4 language families for fine-tuning and assessing large language models. He emphasised that “copy pasting regulations from international markets to local markets may not always work,” highlighting the need for standards that accommodate different cultures, languages, and local constraints.
Both speakers agreed that effective AI governance requires moving beyond traditional technology transfer approaches toward co-creation models where developers and governments collaborate as genuine partners. Google’s Coalition of Secure AI Framework (COSI), which they are expanding across the Asia-Pacific region, exemplifies this collaborative approach.
Workforce Development and Skills Revolution
The AI skills gap emerged as one of the most pressing challenges. Amit Chadha from L&T Technology Services provided a stark assessment: 40-50% of current engineering consulting work has emerged in just the past five years, whilst 60% of today’s work will become obsolete within the next three to five years.
Microsoft’s Amanda Craig-Deckard, Senior Director of the Office of Responsible AI, outlined their comprehensive approach through the Elevate initiative. The company has committed to upskilling 20 million Indians by 2030, having successfully trained 5.6 million people in the past year. Their “Elevate for Educators” programme works with Indian government ministries, schools, vocational institutes, and higher education institutions to achieve scale through educational multiplication effects.
L&T Technology Services has developed a three-pronged workforce development strategy: engaging with colleges during students’ final year to ensure relevant curricula; upskilling existing employees during billable hours rather than waiting for non-billable periods; and tracking personal technology development time beyond work hours. This approach has yielded measurable results, with the percentage of L&T’s workforce spending personal time on technology development increasing from 19% to 52% over five years, whilst annual patent filings have grown from 50 to 200.
Julian Waits from Rapid7 noted the unprecedented pace of change, acknowledging that skills considered essential today may become obsolete within five years. The panel consensus favoured incentive-based approaches over mandates, focusing on making AI tools immediately useful rather than imposing top-down requirements.
Security, Trust, and Multilingual Vulnerabilities
The security discussion revealed sophisticated understanding of emerging threats and defence strategies. Waits noted that AI could potentially eliminate 60% of current human security tasks, though audience members challenged whether this transition would be manageable given the exponential pace of change.
Kakkar introduced the concept of “self-defending systems” that could reverse the traditional defender’s dilemma in cybersecurity, where attackers need only find one vulnerability whilst defenders must protect all potential attack vectors. AI offers the potential to automate routine defensive work, potentially providing the first aggregate advantage to defenders in cybersecurity history.
Amanda Craig-Deckard highlighted a particularly sophisticated challenge: multilingual AI vulnerabilities. AI systems that perform well in high-resource languages but poorly in low-resource languages create exploitable weaknesses. Attackers can use prompt injection techniques in languages like Tamil to “jailbreak” safety systems and circumvent security measures, connecting digital inclusion directly to cybersecurity.
To address these challenges, Microsoft has collaborated with ML Commons to develop jailbreak benchmarks that include multiple Indic and Asian languages. Google has contributed tools like SynthID, a watermark technique for AI-generated content across text, images, video, and audio.
India’s Grassroots-Focused Approach
A recurring theme was India’s evolution from a “back office” to a “front office” for global AI development. Chadha traced this transformation from initial perceptions in the 1990s through data security concerns definitively addressed during COVID-19, to the current reality of global companies developing products in India for worldwide markets.
Kakkar emphasised that whilst some regions focus on AI governance frameworks, India concentrates on AI’s practical impact for farmers, small schools, NGOs, and local hospitals. This grassroots approach aligns with India’s digital public infrastructure philosophy, exemplified by systems like Aadhaar and UPI that achieved massive scale through practical utility.
Rather than viewing challenges like bandwidth constraints and linguistic diversity as obstacles, Indian AI development treats them as design parameters that can inform more inclusive global solutions.
Addressing Infrastructure Challenges
Despite the optimistic tone, participants acknowledged persistent challenges. An audience member, Rita Soni from the Digital Empowerment Foundation, highlighted the gap between high-level AI discussions and basic connectivity challenges in rural areas, reminding the panel that AI democratisation must address fundamental infrastructure deficits.
Lee Tiedrich raised another challenge: the lack of data standardisation and voluntary sharing frameworks necessary for AI customisation across different regions. Data exchange faces significant friction due to incompatible formats and absence of standard agreements.
Future Challenges and Exponential Change
The discussion concluded with sobering reflections on AI’s exponential pace of development. Audience members challenged the panel’s assumptions about manageable transitions, arguing that rapid economic displacement and power polarisation may outpace adaptation efforts.
Tiedrich emphasised the importance of teaching students “how to think” rather than specific skills, recognising that adaptability and problem-solving capabilities will be more valuable than domain-specific knowledge in a rapidly changing technological landscape.
Conclusions
The panel revealed both significant progress and persistent challenges in democratising AI’s global impact. There was strong consensus on key principles: the need for localised rather than universal AI standards, the importance of public-private partnerships, preference for incentive-based approaches to AI adoption, and recognition that AI security requires proactive, AI-powered defence systems.
However, the conversation highlighted unresolved tensions between the speed of AI development and institutional adaptation. The discussion positioned India as a potential model for AI democratisation through its focus on grassroots impact and inclusive development, though success depends on addressing fundamental infrastructure challenges and ensuring benefits reach beyond urban technology hubs.
Ultimately, democratising AI requires not just technical solutions but fundamental changes in international collaboration, workforce development, and technology governance approaches. The urgency of implementing these changes may be greater than many participants acknowledged, making immediate action essential for achieving an inclusive AI future.
Session transcript
by corporations, by innovators to secure that outcome. And if current trends continue, the majority of AI’s economic value risks being centered in the hands of countries and corporations in the Western economies in China. And some estimates suggest that 70 % of the value could be created and reside in those locations. And I think it’s for us in this context to think a bit about why we don’t need to accept that outcome. It’s by far means not an inevitability. And to democratize the impact of AI, it requires intentional design, it takes international collaboration, and it takes societies coming together to ensure that doesn’t happen. It also takes innovation and research, workforce development, private sector partnerships, and also trust, safety, and security.
And they’re the things we’re going to talk about on the panel today. And my colleagues are extremely well -placed. to share their thoughts and insights on those topics. So let me introduce the panel. We have Amit Chandha, Managing Director and CEO of L &T Technology Services. Good to see you, Amit.
Happy to be here.
Great to have you with us. Amanda Craig -Dekard, Senior Director, Office of Responsible AI at Microsoft. Great to have you with us. Sachin Kakar from India Site Development, Privacy, Safety and Security at Google. Good to have you with us, Sachin. Thank you for being with us. Lee Tedrick, Inaugurable AI Multidisciplinary Initiative Fellow, University of Maryland, Senior Advisor on the International AI Safety Report. Lee, good to have you with us. And last but by no means least, Julian Waits, Chief Experience Officer with Rapid7. Good to have you with us. Good to have you with us. Okay. So without further ado, let’s take a look at international and scientific research collaboration. And, Lee, let me come to you.
Let me pose. Here’s a question. Okay. And, Lee, let me pose. And the second international AI safety report was released just ahead of this conference, something that you’re very much an author of. Let’s start by hearing from you and then maybe, Sachin, I’ll bring you in. What opportunities do you see, Lee, in open international standards to address the technical challenges that we face while also building trust in AI -based systems and services? Which of these, how would you characterize those challenges and which are most critical in a developing country context?
Yeah, thanks for the question, Brad, and there’s a lot here. So the international AI safety report that I worked on with a panel of about 100 experts was just released. And one of the key takeaways from the report is that while we have made a lot of progress over the past year in evaluations and developing evidence, there’s still a long way to go. There’s a gap. And I think, you know, internationally. International standards organizations and similar efforts is a good way to work together to try to fill some of the gaps. ISO has already released one standard, 42 ,001, which is a good start, but we need to accelerate this, and we need to also recognize the fact that standards and evaluation metrics, you know, there’s a tension.
On the one hand, we want them to be able to apply across borders because we want to enable companies to have responsible technology flow across borders. But on the other hand, it’s really important because we all differ in terms of language and culture that we need to be able to customize them for different cultures, norms, languages. And I think, you know, the standards organizations will continue to play an important role. I spent a year working at NIST, the U.S. National Institute of Standards and Technology. One of the NIST projects is working on what we call the zero draft of trying to create a draft that we could then feed into the ISO process, and NIST is trying to collect stakeholder input into that draft.
And I think, you know, more globally, you know, efforts like the Hiroshima AI process, there are sort of all these pre -standards efforts where different stakeholders across different regions can work together. And I think that the ACs, the AI safety institutes across different countries and how they can coordinate. So I think there’s a lot of work to be done, but I think there’s a lot of avenues where we can collaborate together and make sure that we’re addressing the needs of everybody around the globe. Thank you.
Yeah, thanks, Steve. Very well covered. If I can add just a few more points. I think one of the challenges we see is copy pasting the regulations or standards from, you know, international markets to local markets may not always work. So localizing them, understanding the needs and constraints of the local area. Google launch Indiq GenBench. It’s a test bench for fine tuning. And assessing the. LLM models for local languages, supporting 29 Indian languages, 12 scripts, and 4 language families. So that shows an example of how we need to localize things. The second point is one -time audit or certification may not work as AI evolves. We need a continuous scanning and auditing to make sure we avoid any temporal drift in these standards and the applications.
So Sachin, let’s build on that. How do governments and developers collaborate in a way that we get the outcome that everyone desires, which is not to see the developed markets race ahead of developing countries? What does that collaboration need to look like?
Yeah, that’s an interesting question. I think at highest level, the way we think to bridge the gap between AI divide is to move away from traditional, traditional transfer approach. to more co -creation where developers and government coming together and and the underlying goal is that standards and regulations are seen as enablers and equalizers not as barriers or compliance hurdle so three specific dimensions in which we believe developers and government can collaborate and Google specifically focuses on number one is open source frameworks and interoperability and standards second capacity building and third is workforce upskilling and research I’ll quickly unpack each one of them so starting with open source frameworks AI is not new to Google we have been working on AI for past decades and remember Alpha fold and we were the first one to share the transformer paper on which all the LLMs are built when we were building AI we were also focusing on best AI practices and safety practices on AI And we have open sourced all the best practices to keep AI safe.
Safe SAIF, secure AI framework is something we have shared outside. And it is important to understand supply chain risk. And India’s digital transformation is characterized by DPI, the digital public infrastructure on which Aadhaar and UPI are built. So they can actually leverage some of these secure AI framework to make sure the malware attacks and the vulnerability in open source components are taken care. Now, standards is one thing. The collaboration goes beyond to adoption of them. And Google has co -built the COSI, Coalition of Secure AI Framework, with various industry partners. And this is what we are expanding in APAC, including India. Now, we are also committed to capacity building. With the government. And which means we need to provide tools and infrastructure, not just standards.
So we are proactively sharing the threat intelligence. We are building tools like SynthID and sharing with the community abroad. SynthID is a watermark technique which goes into the text, image, video, audio, and it can tell you whether it is AI -generated content. So some of these tools are also helping us to make sure our commitment towards standards goes into actual adoption. And finally, upskilling workforce, digital literacy, working with government to make sure the vulnerable section of the society, like elderly and teenagers, are aware of some of these challenges. And giving grants to institutes like IITs to push the frontier of research, like PQC, post -quantum cryptography, are other areas of collaboration between AI developers and the government and academia.
Let me just ask you both a question. Is there a trade -off between setting global standards and regulation? ensuring the right environment for innovation and collaboration?
Oh, yeah, that’s right. And that’s where you can start with the global regulations but then adapt them to the local constraints. Like we have bandwidth constraints in India. We have linguistic diversity. And therefore, the global standard should not become a hurdle for the young startups in India. Rather, they become co -creators in enabling the innovation that can happen and then evolve from there. So it’s a creative tension, and I think the best way is to be adaptive in this situation and eventually evolve to the international standard.
How do you see this interplay, Lee?
Yeah, I think, I mean, kind of in my work, you know, both in government, academia, and I spent 30 years working with the private sector, I think sort of figuring out the standards and the standards that are in place and the values that are in place. evaluation techniques is really key. You know, how are we going to evaluate these systems so we can, they can meet a certain threshold of safety. And then I think the question kind of comes in, you know, afterwards, once we know what it is, you know, should there be regulation or not? You know, I worry a lot of times that when we go too quickly toward the regulation, you know, the best of intentions may be there, but, you know, the technology is moving so quickly, regulators don’t necessarily know how to style the regulations to achieve the goal.
And I think sort of working from the bottom up with the science, developing the evaluation technique, taking into account that we do need to socialize, you know, customize for local markets is really important. And then we can get to the question of, well, should there be a regulation or not? And that’s where, you know, different countries may have different answers, but at least we’re working from a common technical framework and evaluation framework to assess systems. Thank you.
Thank you both. Let’s make a shift to… The conversation towards more public -private… collaboration, which I think we know is at the heart of driving the success that everybody’s looking for. And Sachin was talking a little bit about capacity building. Maybe we focus on those two elements. And Amanda, I’ll come to you and then to Amit. So there’s a persistent skills gap in AI. It’s very apparent and a lot’s being done to try and bridge that here in this country. How are your, has your organization, and I’ll come to you Amit with the same question, how are your organizations grappling with that challenge and also collaborating with government to help to narrow that skills gap?
Thank you. Yes, skills gap is really important. We see it as part of the sort of foundational infrastructure for what we need to work on together as Microsoft with other industry partners, government partners, other local partners. It’s going to take a whole community really working together to do this at scale. And just to take a step back for a moment briefly before I talk more specifically about skills, you know, we kind of see this as part of a holistic effort where you kind of need to support all of the enabling infrastructure for AI deployment, kind of from from the infrastructure layer all the way through sort of realizing value in local use cases. So we actually published on Wednesday a blog from our president, Brad Smith, our chief responsible AI officer, Natasha Crampton, where we talk about sort of five areas where we’re really focused on investing to kind of close the gaps between AI diffusion and the global north and global south.
So we talk about, like, hard infrastructure investment, right, in terms of connectivity, AI compute capacity, scaling is the second part of that plan. And the third part is really thinking about multilingual, multicultural AI capability. And the fourth is really working with local partners on local AI deployment and really what we can learn and what’s going to serve local communities, also what we can learn through that process around how we need to adapt the technology so it’s ready for those local use cases. And then really measuring diffusion so that we actually understand how things are going and how we can do that. And then really measuring diffusion so that we actually understand how things are going and how we can do that.
And then really measuring diffusion so that we actually have really informed interventions. And then really measuring diffusion so that we actually have really informed interventions. And then really measuring diffusion so that we actually have really informed interventions. So that’s the kind of holistic approach that we’re thinking about for public -private partnership. And looking at skilling more specifically, we actually have a new sort of initiative that we launched last July at Microsoft called Microsoft Elevate, which is really bringing together a number of ways that we engage with a community that is going to also be part of skilling everyone at scale, so sort of nonprofit communities, schools, and actually ensuring that they’re equipped with the technology itself, so with cloud compute access and with access to AI.
And then we are coupling that with investments in skilling. So we have made some big -number commitments around how we are really trying to do this at scale. I would say specifically for India is, you know, we last year, early last year, we made this commitment to scale up 10 million Indians by 2030. This year, we upscaled 5 .6 million Indians, and so we actually doubled that commitment to 20 million people by the end of 2030. And one of the ways that we’re doing that is we’re actually, we just announced this week a new Elevate for Educators in India program where we’re partnering with local schools, with vocational institutes, with higher education institutions to sort of teach the teachers, right?
So you can actually work at scale, and we’re working with a number of Indian government ministries in this program to figure out, you know, what, how we can ensure that we have tailored programs for all of those different communities and that we’re thinking holistically about how. You know, we, across those different sort of educational paths, are really meeting people where they are and equipping them to kind of do the next powerful thing with AI.
Thanks, Amanda. And as a business, L &T Tech Services, I mean, part of L &T originating here in India, but now very much involved in global markets. How are you tackling this in terms of addressing the skills gap?
Sure. So thank you. So before I go to skill gap, I do want to make a point on the regulation part. I do believe that too much of regulation can stifle innovation as well. So we’ve got to be careful on how much we do and where do we take it. And then the second part, of course, is to do regulation of traffic control in Delhi for our next event that we have. I think all of us will agree. Let’s get down to skills in a second now. I had to say that because it was a mess in the last two days. I’ve got pictures of myself in an auto rickshaw as well. So if we get down to skill gap, I want to address this three ways.
So I am responsible. I run a company which is potentially India’s first, engineering intelligence company with about 25 ,000 employees. I’ve been CEO for five years. When I took over, we used to be about 15 ,000 employees. We’re about 14 now, we’re about 25 ,000 employees. So, we look at skill gap and I look at skill levels. Three things you have to think about. Whatever work we’re doing in engineering consulting today, I want to say 40 to 50 % of that is new and built in the last five years, did not exist. I also want to say that whatever we are doing today, 60 % will be gone in about three to five years time.
That’s the rate and pace of change. So, while my colleague from Microsoft spoke about skilling school stem as well as colleges, we’re doing two different things to stay current with the changing dynamics or three things. One, we are actually reaching out to colleges. In the last year of their curriculum and we are making sure that the curriculum is going to be in the last five years. in India is contextual to what the industry needs. So we are sending our employees to teach. We are using CSR hours. We are doing all of that to build that up. We are actually participating with NASSCOM as well to be able to do that in the skill development. The second thing we are doing is upskilling our own employees.
Now, again, in a developed economy, it’s very simple that you hear these layoffs that happen all the time and they are not because people don’t have work but because the skill is redundant. So let’s go ahead and get a new set of skills. In an Indian context, my colleague here spoke about that very nicely. You can’t cut and paste. You fire a thousand people, you will actually end up spending half your working hours plus more with the labor commissioner here locally. You can’t do that. So you have to be able to skill people up while they are in the workforce. Now, one thing is developing curriculum, developing modules for them. to go through but the second part is actually making them do it so and normally in a consulting company you would send people to get get coached and do upskilling when they are not billable we actually doing it while they are billable because when they become non billable that’s not when you want it you want it before that right so that’s and it’s a major shift in how we’ve been operating the third thing that we are tracking as an engineering and a technology company is how much of personal time is the employee spending on technology development efforts beyond billing hours to the client so you come in and spend 40 hours right and that’s what you normally work now if you spend another three hours to write a technology paper you file a patent you actually go speak at a symposium all that is towards technology effort beyond billable hours.
The percentage of workforce within the company five years ago that did that was at 19%. Today, that number stands at 52 % of our workforce spends time, personal time to go spend time on technology beyond billable hours. And the net result of that has been we used to file 100 patents per year. We have gone from there, sorry, we used to file 50 patents per year. We have gone to filing 200 patents per year. So the point is that so again, summarizing, one, reach out to the local ecosystem and do it and spend the last year with them. That’s the hook in. Second, upskill the workforce within. And third, beyond just money, find a bigger purpose like technology or betterment of human race with technology to motivate your workforce to actually spend time on that.
And I think that’s what we’ve been doing and we think will be helpful. One last thing and we keep discussing India. But if I look at the US today, and I’ve lived there for 27 years now, is we will need schools to start mandating a certain level of STEM education that has to be done. Today, both my boys went to public schools in Virginia. I can tell you that in some schools, it’s broken. And we don’t do that in the US. We don’t do it in parts of Europe. We will continue to look at different countries for skills. And that is not where we want to be in 20 years time. I’m sorry. Jump in.
Jump in, Julius.
I was going to agree with what you just said. Because Rapid7, like your company, of course, we’re a software company. We’ve basically mandated the use of agentic technologies by our employees, especially the ones in developing countries or countries that aren’t as developed as the United States. What I would tell you also on the education system, which is unique to the US, which is what makes India special. And that’s why we’re in such a wonderful place. It’s because of the technology. we’re so far behind, we’re forced to use labor in other societies that appreciate the use of STEM technology and where it’s embedded in the way that they learn. We have no choice. If we didn’t have foreign workers in the U.S., we would fall behind the rest of the world.
You don’t hear that too often.
Let me just probe a little bit on this. How much is carrot and how much is stick when you’re looking to upskill the workforce and bring them into more of an AI mindset? You’ve got a very bold program at Microsoft reaching across colleges, but you’re also active, I know, in creating the capabilities within the workplace. How much of this, to both of you, is carrot or stick? I was at a dinner in D.C. a few weeks ago where the head of a large media group had told his team they had to be two times more productive by the end of 25 using AI. to stay in their roles and 10 times more productive by the end of 26.
That was an expectation. But it was set very much as a minimum standard and goal. They were putting training programs in place, but there was a clear metric to achieve. What’s your perspective based on how you’ve seen this work?
You mean internally?
Either within Microsoft or within the companies that you collaborate with in training.
In our experience, I think we are much more leaning in the direction of using CareReds. So we have a lot of programs internally that are a mix, I think, in terms of tactics that’s important. Both kind of like, here’s a day -long training or a week -long training program, right? Which I think is really valuable. It gives you an opportunity to really dig in. But also really difficult. Difficult to find the time for. And so we actually have weekly tips. for how colleagues that are in similar roles are using Copilot, for example, internally to have more efficiency in their work. And I feel like that’s the kind of thing where, you know, is that skilling, is that training?
I don’t know, but it certainly is helpful because that’s the kind of thing that in my day -to -day job I can look to and integrate much more easily. And the other thing that we’ve started doing is hackathon -type exercises internally that are not just oriented towards engineering communities, but actually our corporate external legal affairs group, which is not just lawyers, but is a lot of lawyers, for example, having a hackathon that’s really meeting that community where we are and building a Copilot to serve our kind of day -to -day work. And so a lot of, like, different kind of carrot approaches is what we’re doing internally and where we see, I personally can say, like, I feel especially the latter two, it’s just hard to find, like, time to do a deep training program.
But if you integrate sort of into your day -to -day work, make it easy with these kind of carrots, you can really start seeing the impact, and that motivates you to use the technology more.
So, stick is out of the window, you can’t do that anymore, right? But we use carrots and budgets. Okay? When I say carrots, it’s basically appealing to the individual now and their glorification. So if it’s a patent, you’re filing it. The company doesn’t own it, you own it, right? If there’s a paper, you’re writing it. If you’re speaking at a symposium, you’re doing it, right? And that allows them to think. And then we’ve actually spent a lot of time through HR to try and explain that with the pace of change of technology, if you don’t upscale, you don’t change, you actually are facing extinction in about five, ten years’ time. Gone are the days where you can be there on the same technology for 30 years, will not work, right?
So we home in the message, provide that, and then provide the push. we glorify people that file patents, we glorify them within the company so that’s one. Second when I come to budgets, we actually leverage budgets with our segment heads. So they’re given budgets, they’re given training budgets, we also provide them headcount budgets and say can’t exceed. So we’ve been able to actually improve productivity with AI so we used to run on a utilization of productivity basis the metrics all service companies track at about 73 % five years ago. We’re already at 83 % and I think I can push this up another 2 % in terms of productivity levels in the company again leveraging AI and that’s the budget approach that we use but with the seniors.
So it’s a mix of both if I may to be able to manage this and motivate this. But it’s an ongoing exercise.
It’s fascinating maybe we’ll come back to it as we talk to a close. Let me shift gears a little bit and talk a bit more about security and trust and come to you Julian if I can. So I think we’ve recognized and we’ve heard it in different conversations this week that there’s a trust deficit around the use of AI, certainly in a public context. There is some fear, suspicion, and anxiety in a global context. I’m not talking just about India. YouGov carried out a survey in the U.S. last month, and in the context of fintech, they found that less than 20 % of Americans trust AI in financial services. And they’re also sort of struggling, I think, with some of the cybersecurity questions and issues, which you’re very well placed to address.
So if public trust in AI remains fragile and AI -specific cyber risks are growing, which they clearly are, what are the immediate steps that industry should prioritize to counter those threats? And… Things like prompt injection attacks. How can these solutions be scaled? Thank you. particularly for developing countries?
of seven. So other than the incentives that we’re giving you to learn these technologies, which of course is to the company’s benefit, it’s to your benefit because these skills that you’re learning and that you’re going to be using will translate to the next thing that you do, and it makes you that much better. If we do enough of that, not only are we helping the employees, but we’re helping the societies and the ecosystem that they live in, including in India. I wanted to add one additional area that we’re really focused on to address the kind of AI cyber threats, particularly relevant in India and other areas in the global south. I mentioned that one of the areas that we’re focused on is multilingual and multicultural AI capabilities, and one of the most important foundational reasons for doing that, of course, is that you have an AI that works well.
and in different languages and cultural context is reliable performs well. Another reason is also that AI that is not robust and it’s multilingual and multicultural capabilities does have additional security weaknesses. You mentioned prompt injection attacks and you know one way in which you can think about a prompt injection attack is basically if you have an AI system and you have the sort of safety system around that, someone who is misusing the technology can sort of try to break that safety system or get around it and one of the ways that attackers do that is by using languages that are not well supported in that model or system right so if a model or system is primarily prepared to perform well in high resource languages, but not in low resource languages.
Tamil, for example, or some other sort of language that is not really built in to how the model performs, if companies aren’t attuned to that, then an attacker could use that language and jailbreak the system, basically get around the safety system. And so it’s just another reason why it’s really important from our perspective, and we’re partnering with a lot of others in industry and government, so this comes back to a public -private partnership opportunity, to really work on multilingual and multicultural AI capabilities. One of the things that we announced this week is actually there’s a benchmark from an organization called ML Commons, which is a jailbreak benchmark. It’s actually measuring how robust systems are against that kind of prompt injection attack technique.
And we worked with a number of others to really build out the current version of that. which is really English -specific, to include multiple Indic languages and Asian languages in terms of its capability. It’s not going to solve the problem. It’s one step of what we see in the right direction. But I just want to draw that sort of really specific area of focus in India and other areas for thinking about the kind of AI and cyber threats.
That’s wonderful. Thank you.
Can I add a point?
Sure.
So this is about the rise in prominence of AI agents. And we have been constantly investing in self -defending systems, just like a human immune system. As agents grow and they can – the scale and speed at which they can attack infrastructure, the hospitals, the energy grids, we need agents on the other side. And this becomes AI versus AI story, where we are smartly inventing agents. And we believe, first time, with AI. We can reverse the defender’s dilemma. So the dilemma, many of you might already know, attackers have to find just one open wallet in this crowd, but defenders have to protect all the wallets all the time. And first time, AI will give us aggregate advantage to defenders because majority of defenders’ time, 80%, goes in drudgery and skunk work.
And AI can actually automate and uplift that work. So the entire stack of defenders can improve and uplift with AI. And we believe that we’ll be able to build a self -defending adaptive system which can protect us from various vulnerabilities.
Wonderful. Thank you. Well, we’re drawing towards the close of the session, and it’s been a very rich conversation. I just wanted to take a step back and ask you all, you’ve been – most of you have been here all week. And you’ve heard a whole host of different interventions and some very significant investments and initiatives. What are your conclusions? What’s changed? changed in your perspective when you look at AI for the future from your own vantage point? What’s this event given you a new perspective on or crystallized in your minds? Maybe, let me go back to Lee. Do you want to share your thoughts?
It’s reinforced for me, you know, something I’ve seen through a lot of my international work with OECD, with global partnership on AI, just the need for the global cooperation, and not just at the government level, but among all different types of stakeholders, you know, within academia, within industry, within civil society, and working together. And I think, you know, we can sort of pause at this moment and say, you know, if you look at the safety report, we’ve made a lot of progress over the last few years, but we need to continue to work together and not just focus on the harms and the risks that AI can have, but think about the benefits. You know, if we are able to leverage AI, we might be able to, you know, help achieve some of the UN Sustainable Development Goals.
I think one other thing I want to just kind of enter into the mix, you know, the customization of AI for different regions also depends upon data. And a lot of my work has focused on, you know, how do we create voluntary foundations so we can exchange data more easily? Like right now, we don’t have data standardization. So if I want to exchange my data with any of you, my data may be in a different format. As a former lawyer, a lot of my work is also focused on we don’t even have standard agreements. So if we want to exchange data, how can we easily transact and not have all that friction and transaction costs?
You know, we don’t have the Creative Commons licenses right now for data. And if we’re ever going to get to that localization and that ideal point where we’re customizing for different cultures, we’re going to have to have a lot of different tools. we’re going to have to figure out ways where we can voluntarily and responsibly share data. And this has been part of the discussion, but hearing the conversations over the past week kind of underscore the need to continue to advance that work while we work on some of the other topics that we’ve been discussing.
Great. Julian?
More than anything, what this week has taught me is I’m old and this industry is moving.
Okay, so stop saying you’re old. You don’t look old. You look great.
This industry is moving so quickly. Again, skills that are needed and considered to be important today will no longer be necessary in five years. And if the workforce and if the users of the technologies aren’t evolving with it, we all fall behind. So what is a great advantage and opportunity in using AI, the danger is it also cannot. obsolete at the same time. And we need to be very careful of that and how we use it and then how we help, hopefully, to promote this throughout the world in a way that makes it equitable for everyone.
Great. Thank you. Amit, Sachin, any reflections?
Yeah, I think one of my big takeaway from this week was some parts of the world are focused on AI as an influence. Some part of the world is focused on governance of the AI. I think India is focused on impact of AI at the grassroots level. Thinking about how AI will impact a farmer or a small school or an NGO or a small hospital has been the focus. And it resonates with me because mission of my team is to keep everyone safe at scale. And when I say everyone, it’s not just about Google or Alphabet or not just about our billions users. but the entire society, everyone at scale, and how to make sure we become the architect and not just the consumer of AI and make sure it reaches to the grassroots level is one area to think about.
I agree with that. So, of course, outside of the traffic bit, right? What you learn, if you ask me, in the whole week that I’ve seen is that if I, and I’ve been in this business for, I don’t want to date myself, so say a couple of decades and we leave it there. But people used to say India is a back office. That’s how it started in 90s. People said India is a back office. Y2K happened and they said the IT industry will be over, right? Because Y2K, that’s all there is. Today, the IT industry, engineering industry together is $600 billion. We move forward. People said, are you going to take data? And are you going to?
Is data going to get leaked? and then COVID came and India proved yet again there was not a single data leakage that happened from India Inc anywhere. There are some draconian rules. We don’t allow our employees to use USBs, blah, blah, blue, blue. Net result, zero data leakage, absolute privacy and the government comes down very heavily if they get something like this. So they’ve been able to create a safe environment. Move forward. People used to say is India a market? This last week and forget technology companies, if you just walk the floors, you see people like Schneider, you see people like Vertiv, you see others, they are developing products for India. In India, you’re developing products to the world from India and it’s no longer just a cost base.
So if I was to say there’s one thing that I’ve learned in the last week, it is that India is no longer the back office for AI. it is actually the front office for AI for the world and that’s the net summary that I would draw in the entire week that I’ve been here
Thank you, that’s very funny Bill
And I, you know, zooming out to the sort of highest level, one of the things that I really genuinely felt this week that has been very exciting to me is that there is a lot of energy around how to deploy this technology, how to have impact it’s been actually really fun to be in a lot of sessions with students and entrepreneurs that you can really feel the energy and I feel that it has the conversation around governance has come along and felt integrated in a really genuine way as well, if we look at the kind of summit series that kicked off a few years ago at Bletchley, I think it’s fair to say early on the emphasis of the conversation felt very safety and security heavy last year In France, there was a big pivot to trying to think about the opportunity.
And what I see in India this week is a genuine integration of those conversations and a deepening of those conversations. So really, what do we mean when we say impact? What really do we want to see in deploying this technology? And then sort of not taking for granted that, of course, governance actually has to come along with that. You have to really do the deep, hard work around things like multilingual AI. And there’s a real need for a partnership in moving those things forward. And there’s a real need to think about governance steps so that you can have trust in this technology. India actually just passing a law last week thinking about how to mark AI -generated content.
There’s a real sort of recognition that some of those steps are going to be important. And you don’t want to stop or have those steps sort of prevent deployment of the technology or realization of the benefits. But, like, you know, we have to do the deep work together to sort of move. Forward across a dime. A dime. and impact and governance together.
Thank you. Thanks, Amanda. We’ve got a few minutes. If anyone would like to chip in. Great. Hands are going up. The room’s filled, by the way, while we’ve been going along, and it’s been a great conversation. Let’s hand one or two mics out to colleagues around the room, if we can, to the lady here on the front.
Hello? Hello? Okay. Right. Thanks, and I appreciate the comments and the traffic. I think we’ve all got a traffic story. Now, I hear a lot of talk about upskilling, co -creation, which are all very important things. I agree. But what I’m also hearing a lot from, and I’m sure you all are too, is the issue of speed of this technology that could potentially outspace some of this real scenario. So my question to you is, you know, what do we – and this goes to anyone who might want to answer or has some real thoughts on it. What do you think might – be the gaps between that that we would need to address in a transition process between upscaling and real economic displacement.
Who can grab that? Yeah you put the mic Julian you’re gonna give it a go.
It’s a real problem right meaning technology is moving so quickly as I said years ago I would tell young people in technology learn to be the best programmer you can. Now with agentic AI especially with the usage of MCP where you can have multiple agents talking to each other sharing information it’s now learning to be the best user and prompter of the technology understanding the outcomes but there’s gonna be some displacement. It’s you know right now I would tell you AI especially in the security context I can probably eliminate 60 % of the things that humans have to look at today. but there’s still the 40 % where a human has to be involved to make a determination around risk to an entity, whether it’s a government, whether it’s defense, whether it’s a business.
And so it’s really helping them evolve to this next level of user, this next level of programmer, if you want to call it that. And there probably will be some displacement that we just can’t get around.
Gentleman in the front.
I actually have an extension of the same concern that the lady shared. The speed is one aspect, but also I think there’s a whole information arbitrage between the people who are creating and pioneering in the AI space versus the others to whom the information is reaching. And the impact of that on the power polarization and even the democracies. You know, that possibility I sense. And a lot of the conversation that I hear today is assuming that, you know, AI is moving linearly, but I see it moving exponentially. I agree. With a polarizing effect. Yes. Yes. Both. Both the polarizing effect and the effect, you know, like I think 40 % that Serge just spoke about.
For me, that 40 % is not really 40%. It’s just that we want to be very, very careful. But if we were to not care so much about how accurate and how much data standards we have. It could be 100%. You know, it’s very large. You know, I think the displacement can happen very fast. So I’m really concerned about how things are moving. I’m not sure if my concern is being shared by people in the panel.
Anyone want to respond?
I mean, I think we need to focus on AI literacy because, you know, again, the technology is moving so fast. How do we make sure people in their everyday lives? People in the workforce have access to education so they can continue to upskill. and I also think being in academia after having been in the private sector for we won’t go into how many decades but teaching students how to think I think a good student when you’re looking at your career trajectory it’s not just coming out of college with a set of skills but teaching them how to think, how to problem solve and I think it’s really the public -private partnerships that Amanda mentioned with academia is really important because a lot of times the tenured faculty, they don’t know how to teach that to students and bringing people in to tell them this is how you adapt, these are sort of what you’re going to expect in your career and I say this not only from the perspective of being in academia but having two children of my own in their 20s who are just starting their career and sort of expect the unexpected but learn how to be on your toes I think a lot of it is just having the good analytics skills, having good communication skills and if you have those core skills you’re going to be able to adapt and it will carry forward in the future.
Great. I think we’ve got time for one more question. Okay, gentle. Oh, the lady who, sorry, the lady who has the mic. She has the mic.
Thank you so much. My name is Rita Soni. I work with a company that’s operating in small -town India, delivering all these tech services that many of these companies are doing. And my question is actually for Amanda, because I think she was the only one who really brought up the digital divide that continues to exist, both in India and across the globe. I actually didn’t feel like I heard very much about how to actually bridge that. Yesterday I didn’t have one of those special passes to go to the events on the 19th, so instead I visited a local nonprofit called the Digital Empowerment Foundation, which has been around for more than 20 years, doing incredible work in rural India.
And they’re simply talking about last -mile Internet connectivity, let alone the enablement or ease. in the critical thinking that Lee just mentioned. So just a few more words on how it is that we can bridge this digital divide and make it more equitable, because I think the more folks are going to be excluded, the more different kinds of problems that we’re going to have.
Yeah, and I think you may have come in after we talked briefly about some of the work that we’re doing to address the digital divide. And for a lot of words, I would point you actually to we published a blog on Wednesday where we talked about investments in five areas that we’re thinking about to close the gaps that we see. And we actually point to the work that we’ve done using our own telemetry to sort of track these gaps and their trajectory and really lifted up our own concerns about the trajectory. And so among the areas of investment, infrastructure is really foundational. And we actually do talk in the blog about of course, infrastructure in terms of like AI.
compute capacity, but actually the fundamentals beyond, like, in terms of connectivity, energy access as really important as well. And then we talked about scaling multilingual and multicultural AI capabilities, really working with local communities on local use cases and the kind of deep work that we can do to sort of help bring the technology to people and see, like, even in agriculture, for example, we at Microsoft Research have done a lot of projects, like, in close collaboration with local communities and try to see, like, how could this serve you and then also learn from how the technology needs to evolve in order to do so better. And basically then also taking a step back and continuing to study diffusion so we understand, like, are our interventions working?
Are they not? If so, what can we learn and how can we improve how we’re intervening?
Okay, so time’s up, everyone. Thank you so much for your contributions and for joining us at different points during the conversation. Thanks to the panelists for a really rich and diverse conversation. It’s been a real pleasure to have you with us. And I think we end with a sense of optimism that no matter what the challenges of the digital divide and those other elements, there’s probably an AI solution to the AI challenges that we’re creating. Thanks. Thank you. Thank you.
Lee Tiedrich
Speech speed
190 words per minute
Speech length
1193 words
Speech time
374 seconds
Global standards must be adaptable to local cultures and languages
Explanation
Lee stresses that AI standards need to be customized for different cultural and linguistic contexts, otherwise they will not be effective across diverse regions.
Evidence
“But on the other hand, it’s really important because we all differ in terms of language and culture that we need to be able to customize them for different cultures, norms, languages.” [3]. “ISO has already released one standard, 42 ,001, which is a good start, but we need to accelerate this, and we need to also recognize the fact that standards and evaluation metrics, you know, there’s a tension.” [16]. “And I think, you know, more globally, you know, efforts like the Hiroshima AI process, there are sort of all these pre -standards efforts where different stakeholders across different regions can work together.” [27].
Major discussion point
International Standards & Collaboration for AI Safety
Topics
Artificial intelligence | Data governance | The enabling environment for digital development
Voluntary data‑sharing frameworks enable cross‑regional AI customization
Explanation
Lee argues that responsible, voluntary data sharing is essential to allow AI models to be tailored to local needs while respecting privacy and governance norms.
Evidence
“we’re going to have to figure out ways where we can voluntarily and responsibly share data.” [63].
Major discussion point
Bridging the Digital Divide & Infrastructure
Topics
Data governance | Closing all digital divides | Artificial intelligence
Global cooperation among all stakeholder groups is essential
Explanation
Lee highlights that safe AI deployment requires collaboration not only among governments but also academia, industry, and civil society worldwide.
Evidence
“need for the global cooperation, not just at the government level, but among all different types of stakeholders, you know, within academia, within industry, within civil society, and working together.” [29].
Major discussion point
Reflections & Future Outlook
Topics
Artificial intelligence | Social and economic development
AI literacy is critical given rapid technological change
Explanation
Lee points out that continuous AI literacy training is needed for the workforce to keep pace with fast‑moving AI developments.
Evidence
“I think we need to focus on AI literacy because, you know, again, the technology is moving so fast.” [84].
Major discussion point
Addressing the AI Skills Gap
Topics
Capacity development | Artificial intelligence
Sachin Kakkar
Speech speed
149 words per minute
Speech length
1152 words
Speech time
462 seconds
Copy‑pasting regulations from other markets often fails; localization is essential
Explanation
Sachin warns that directly transplanting international AI regulations into local contexts can be ineffective, emphasizing the need for adaptation.
Evidence
“I think one of the challenges we see is copy pasting the regulations or standards from, you know, international markets to local markets may not always work.” [4].
Major discussion point
International Standards & Collaboration for AI Safety
Topics
Artificial intelligence | The enabling environment for digital development
Shift to co‑creation: standards as enablers, not barriers
Explanation
Sachin proposes moving from a traditional transfer model to a co‑creation model where developers and governments work together, treating standards as facilitators of innovation.
Evidence
“Rather, they become co -creators in enabling the innovation that can happen and then evolve from there.” [49].
Major discussion point
Government‑Developer Co‑creation & Capacity Building
Topics
Capacity development | Artificial intelligence | The enabling environment for digital development
Open‑source frameworks (Secure AI Framework, COSI) foster interoperability and security
Explanation
Sachin highlights that sharing open‑source security frameworks helps the ecosystem build interoperable and safer AI systems.
Evidence
“Safe SAIF, secure AI framework is something we have shared outside.” [31]. “Google has co -built the COSI, Coalition of Secure AI Framework, with various industry partners.” [54].
Major discussion point
Government‑Developer Co‑creation & Capacity Building
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Provide tools like SynthID and share threat intelligence for capacity building
Explanation
Sachin notes that building tools such as SynthID watermarking and openly sharing threat intelligence strengthens the overall AI security posture.
Evidence
“We are building tools like SynthID and sharing with the community abroad.” [35]. “So we are proactively sharing the threat intelligence.” [36].
Major discussion point
Government‑Developer Co‑creation & Capacity Building
Topics
Capacity development | Building confidence and security in the use of ICTs
Global regulations must be adapted to local constraints like bandwidth and linguistic diversity
Explanation
Sachin emphasizes that AI regulations need to consider practical local limitations such as network bandwidth and language diversity.
Evidence
“And that’s where you can start with the global regulations but then adapt them to the local constraints.” [1].
Major discussion point
Balancing Global Regulation with Innovation
Topics
Artificial intelligence | The enabling environment for digital development
Multilingual models are vulnerable to prompt‑injection attacks in low‑resource languages
Explanation
Sachin points out that attackers can exploit gaps in low‑resource language support, making robustness across all languages a security priority.
Evidence
“if a model or system is primarily prepared to perform well in high resource languages, but not in low resource languages.” [125].
Major discussion point
Security, Trust, and AI‑Specific Cyber Risks
Topics
Building confidence and security in the use of ICTs | Artificial intelligence
AI‑driven defensive agents can give defenders an aggregate advantage
Explanation
Sachin argues that as AI agents become more capable, they can be deployed defensively to offset the speed and scale of AI‑enabled attacks.
Evidence
“As agents grow and they can – the scale and speed at which they can attack infrastructure… we need agents on the other side.” [65].
Major discussion point
Security, Trust, and AI‑Specific Cyber Risks
Topics
Building confidence and security in the use of ICTs | Artificial intelligence
ML Commons jailbreak benchmark now includes Indic and Asian languages
Explanation
Sachin notes that the benchmark has been expanded to evaluate robustness of models in multiple Indic and Asian languages.
Evidence
“benchmark … include multiple Indic languages and Asian languages.” [138].
Major discussion point
Security, Trust, and AI‑Specific Cyber Risks
Topics
Building confidence and security in the use of ICTs | Artificial intelligence
India’s AI ecosystem is evolving from a cost base to a global innovator
Explanation
Sachin observes that India is transitioning from a back‑office AI provider to a leading source of AI innovation for the world.
Evidence
“India is no longer the back office for AI.” [150].
Major discussion point
Reflections & Future Outlook
Topics
Artificial intelligence | Social and economic development
Amanda Craig Deckard
Speech speed
180 words per minute
Speech length
1537 words
Speech time
509 seconds
Microsoft Elevate aims to upskill 20 million Indians by 2030
Explanation
Amanda describes Microsoft’s Elevate program, which partners with schools and ministries to provide AI training at massive scale.
Evidence
“This year, we upscaled 5 .6 million Indians, and so we actually doubled that commitment to 20 million people by the end of 2030.” [69]. “new Elevate for Educators in India program where we’re partnering with local schools, with vocational institutes, with higher education institutions to sort of teach the teachers, right?” [70].
Major discussion point
Government‑Developer Co‑creation & Capacity Building
Topics
Capacity development | Artificial intelligence | The enabling environment for digital development
Elevate provides cloud access, training, hackathons and weekly tips
Explanation
The program equips participants with cloud compute, AI tools, and regular learning content to embed AI in daily work.
Evidence
“Microsoft Elevate … bringing together a number of ways that we engage with a community … ensuring that they’re equipped with the technology itself, so with cloud compute access and with access to AI.” [71]. “And we actually have weekly tips.” [100].
Major discussion point
Addressing the AI Skills Gap
Topics
Capacity development | Artificial intelligence
Investing in hard infrastructure: connectivity, AI compute and scaling
Explanation
Amanda outlines Microsoft’s focus on building the foundational infrastructure needed for AI diffusion, including connectivity and compute capacity.
Evidence
“hard infrastructure investment, right, in terms of connectivity, AI compute capacity, scaling is the second part of that plan.” [59]. “compute capacity, but actually the fundamentals beyond, like, in terms of connectivity, energy access as really important as well.” [146].
Major discussion point
Bridging the Digital Divide & Infrastructure
Topics
Closing all digital divides | Information and communication technologies for development | Artificial intelligence
Amit Chadha
Speech speed
164 words per minute
Speech length
1854 words
Speech time
675 seconds
L&T three‑pronged upskilling: curriculum, on‑the‑job training, and personal project incentives
Explanation
Amit explains L&T’s strategy to develop AI skills through curriculum development, integrating training into billable work, and rewarding personal innovation.
Evidence
“we actually doing it while they are billable because when they become non billable that’s not when you want it…” [105]. “developing curriculum, developing modules for them.” [107]. “we glorify people that file patents, we glorify them within the company so that’s one.” [114].
Major discussion point
Addressing the AI Skills Gap
Topics
Capacity development | Artificial intelligence
Mix of carrot incentives and budget allocations drives AI adoption
Explanation
Amit highlights that combining financial incentives with recognition (patents, publications) motivates employees to adopt AI technologies.
Evidence
“we glorify people that file patents, we glorify them within the company so that’s one.” [114]. “we have a lot of programs internally that are a mix, I think, in terms of tactics that’s important.” [118].
Major discussion point
Addressing the AI Skills Gap
Topics
Capacity development | Artificial intelligence
Focus on AI impact at grassroots level – farmers, schools, NGOs
Explanation
Amit stresses that L&T is directing AI solutions toward rural and community use cases to move India from a back‑office to a front‑office AI role.
Evidence
“I think India is focused on impact of AI at the grassroots level.” [147]. “India is no longer the back office for AI.” [150].
Major discussion point
Bridging the Digital Divide & Infrastructure
Topics
Closing all digital divides | Social and economic development | Artificial intelligence
Julian Waits
Speech speed
172 words per minute
Speech length
437 words
Speech time
152 seconds
Self‑defending AI systems are needed to counter rapid, large‑scale threats
Explanation
Julian argues that AI must incorporate adaptive, self‑defending mechanisms similar to an immune system to keep pace with sophisticated attacks.
Evidence
“we will be able to build a self‑defending adaptive system which can protect us from various vulnerabilities.” [140].
Major discussion point
Security, Trust, and AI‑Specific Cyber Risks
Topics
Building confidence and security in the use of ICTs | Artificial intelligence
AI skills become obsolete within five years
Explanation
Julian points out the rapid turnover of AI competencies, emphasizing the need for continuous learning.
Evidence
“skills that are needed and considered to be important today will no longer be necessary in five years.” [162].
Major discussion point
Reflections & Future Outlook
Topics
Capacity development | Artificial intelligence
Brad Staples
Speech speed
151 words per minute
Speech length
1335 words
Speech time
529 seconds
Trade‑off between global standards and regulation
Explanation
Brad raises the question of whether setting universal AI standards might conflict with the need for regulation, highlighting a tension in policy design.
Evidence
“Is there a trade -off between setting global standards and regulation?” [2].
Major discussion point
International Standards & Collaboration for AI Safety
Topics
Artificial intelligence | The enabling environment for digital development
Trust deficit and need for public‑private collaboration
Explanation
Brad notes that a lack of public trust in AI and rising AI‑specific cyber risks require coordinated industry and government action.
Evidence
“So there’s a persistent skills gap in AI.” [89]. “So there’s a trust deficit around the use of AI, certainly in a public context.” [123].
Major discussion point
Security, Trust, and AI‑Specific Cyber Risks
Topics
Building confidence and security in the use of ICTs | Artificial intelligence
Audience
Speech speed
157 words per minute
Speech length
519 words
Speech time
198 seconds
Perception of a widening skills gap
Explanation
Audience members comment that the rapid evolution of AI creates a growing skills gap that must be addressed through education and training.
Evidence
“So, I hear a lot of talk about upskilling, co -creation, which are all very important things.” [52]. “So, I hear a lot of talk about upskilling, co -creation, which are all very important things.” [52].
Major discussion point
Addressing the AI Skills Gap
Topics
Capacity development | Artificial intelligence
Agreements
Agreement points
Need for localized rather than universal AI standards and regulations
Speakers
– Lee Tiedrich
– Sachin Kakkar
Arguments
Need for accelerated international standards development while recognizing cultural customization requirements
Importance of localizing global standards rather than copy-pasting regulations, with continuous auditing as AI evolves
Summary
Both speakers agree that while international standards are important, they must be customized for local cultures, languages, and constraints rather than simply copying global frameworks
Topics
Artificial intelligence | The enabling environment for digital development | Closing all digital divides
Importance of public-private partnerships for AI development and deployment
Speakers
– Sachin Kakkar
– Amanda Craig Deckard
– Amit Chadha
Arguments
Standards should be enablers and equalizers, not barriers, requiring co-creation between developers and governments
Holistic approach needed including infrastructure, multilingual AI capabilities, and local partnerships to scale skills development
Three-pronged approach: college curriculum updates, upskilling current workforce while billable, and encouraging personal technology development time
Summary
All three speakers emphasize the critical need for collaboration between government, industry, and educational institutions to effectively develop and deploy AI technologies
Topics
Financial mechanisms | Capacity development | The enabling environment for digital development
Rapid pace of technological change requiring continuous adaptation and learning
Speakers
– Amit Chadha
– Julian Waits
– Lee Tiedrich
Arguments
40-50% of current engineering work is new from last five years, with 60% of today’s work becoming obsolete in 3-5 years
Industry moving too quickly for traditional security approaches, requiring evolution to next-level users and prompters of technology
Importance of AI literacy and teaching students how to think and problem-solve rather than just specific skills
Summary
All speakers acknowledge the unprecedented speed of technological change and the need for continuous skill development and adaptation rather than one-time training
Topics
Capacity development | The digital economy | Artificial intelligence
Preference for incentive-based rather than mandate-based approaches to AI adoption
Speakers
– Amanda Craig Deckard
– Amit Chadha
Arguments
Carrot-based approaches more effective than mandates, using weekly tips, hackathons, and integrated day-to-day work applications
Combination of individual recognition (patents, papers) and budget-based productivity improvements to motivate workforce development
Summary
Both speakers advocate for positive incentives and recognition rather than punitive measures to encourage AI adoption and skill development
Topics
Capacity development | The digital economy
AI security requires proactive, AI-powered defense systems
Speakers
– Sachin Kakkar
– Julian Waits
– Amanda Craig Deckard
Arguments
Self-defending systems using AI agents to reverse the defender’s dilemma and automate 80% of defensive drudgery work
Industry moving too quickly for traditional security approaches, requiring evolution to next-level users and prompters of technology
Multilingual AI robustness critical for security, as attackers exploit low-resource language vulnerabilities for prompt injection attacks
Summary
All speakers agree that traditional cybersecurity approaches are insufficient and that AI-powered defensive systems are necessary to counter AI-enabled threats
Topics
Building confidence and security in the use of ICTs | Artificial intelligence
Similar viewpoints
Both emphasize the importance of building technical foundations and understanding local needs before implementing top-down solutions, whether in regulation or digital access
Speakers
– Lee Tiedrich
– Amanda Craig Deckard
Arguments
Bottom-up approach starting with technical evaluation frameworks before regulation to avoid premature regulatory constraints
Digital divide remains a critical challenge requiring infrastructure investment and local community collaboration
Topics
The enabling environment for digital development | Closing all digital divides
Both speakers view India as moving beyond being a service provider to becoming an innovation center focused on practical, ground-level AI applications
Speakers
– Sachin Kakkar
– Amit Chadha
Arguments
India focused on grassroots AI impact for farmers, schools, NGOs, and hospitals rather than just governance or influence
India transitioning from back office to front office for AI development, creating products for global markets
Topics
Social and economic development | The digital economy | Information and communication technologies for development
Both emphasize the critical importance of multilingual AI capabilities and sharing security tools to address vulnerabilities and ensure inclusive AI development
Speakers
– Amanda Craig Deckard
– Sachin Kakkar
Arguments
Multilingual AI robustness critical for security, as attackers exploit low-resource language vulnerabilities for prompt injection attacks
Sharing of tools like SynthID for AI-generated content detection and secure AI frameworks with industry partners
Topics
Building confidence and security in the use of ICTs | Closing all digital divides | Artificial intelligence
Unexpected consensus
Regulation should follow rather than precede technical understanding
Speakers
– Lee Tiedrich
– Amit Chadha
Arguments
Bottom-up approach starting with technical evaluation frameworks before regulation to avoid premature regulatory constraints
Three-pronged approach: college curriculum updates, upskilling current workforce while billable, and encouraging personal technology development time
Explanation
Unexpected consensus between an academic/government advisor and a business CEO that regulation can stifle innovation if implemented too quickly without proper technical foundation
Topics
The enabling environment for digital development | Artificial intelligence
AI will fundamentally change rather than just augment human work
Speakers
– Julian Waits
– Amit Chadha
– Audience
Arguments
Industry moving too quickly for traditional security approaches, requiring evolution to next-level users and prompters of technology
40-50% of current engineering work is new from last five years, with 60% of today’s work becoming obsolete in 3-5 years
Concern about exponential rather than linear AI development creating rapid economic displacement and power polarization
Explanation
Broad consensus across industry practitioners and audience that AI represents a fundamental transformation rather than incremental change, with significant implications for workforce displacement
Topics
The digital economy | Capacity development | Social and economic development
Overall assessment
Summary
Strong consensus on need for localized AI approaches, public-private partnerships, continuous learning, incentive-based adoption, and AI-powered security. Unexpected agreement on regulation timing and transformational nature of AI change.
Consensus level
High level of consensus across diverse stakeholders (industry, academia, government) suggests mature understanding of AI challenges and practical approaches to address them. This alignment could facilitate more effective policy development and implementation strategies.
Differences
Different viewpoints
Timing and approach to AI regulation
Speakers
– Lee Tiedrich
– Amit Chadha
Arguments
Bottom-up approach starting with technical evaluation frameworks before regulation to avoid premature regulatory constraints
Too much of regulation can stifle innovation as well. So we’ve got to be careful on how much we do and where do we take it
Summary
Lee advocates for developing scientific foundations and evaluation techniques first before regulation, while Amit warns that excessive regulation can stifle innovation. Both are cautious about regulation but Lee emphasizes building technical frameworks first, while Amit focuses on limiting regulatory scope.
Topics
The enabling environment for digital development | Artificial intelligence
Workforce development approach – mandates vs incentives
Speakers
– Amanda Craig Deckard
– Amit Chadha
Arguments
Carrot-based approaches more effective than mandates, using weekly tips, hackathons, and integrated day-to-day work applications
Combination of individual recognition (patents, papers) and budget-based productivity improvements to motivate workforce development
Summary
Amanda emphasizes purely carrot-based approaches with integrated learning, while Amit uses a combination of carrots (recognition) and budget constraints/productivity targets as motivational tools. Amit explicitly states ‘stick is out of the window’ but still uses budget pressures.
Topics
Capacity development | The digital economy
Speed and scale of AI displacement
Speakers
– Julian Waits
– Audience
Arguments
AI can eliminate 60% of security tasks but 40% still requires human risk determination
Suggests that the 40% of work requiring human involvement could actually be much smaller if accuracy standards were relaxed
Summary
Julian maintains that 40% of work will still require human involvement for risk determination, while audience members argue this percentage could be much smaller if accuracy standards were relaxed, suggesting more rapid and complete displacement is possible.
Topics
The digital economy | Building confidence and security in the use of ICTs
Unexpected differences
Fundamental nature of AI development trajectory
Speakers
– Multiple panelists
– Audience
Arguments
Various speakers discuss linear progression and manageable transitions
Concern about exponential rather than linear AI development creating rapid economic displacement and power polarization
Explanation
While panelists generally discussed AI development as manageable with proper planning and gradual transitions, audience members challenged this assumption by arguing AI is developing exponentially rather than linearly, creating more urgent displacement concerns. This represents a fundamental disagreement about the pace and nature of AI advancement.
Topics
Artificial intelligence | The digital economy | Human rights and the ethical dimensions of the information society
Overall assessment
Summary
The discussion revealed relatively low levels of fundamental disagreement among panelists, with most conflicts centered on implementation approaches rather than core objectives. Key areas of disagreement included the timing and scope of AI regulation, specific methods for workforce development, and assessments of displacement speed.
Disagreement level
Low to moderate disagreement level among panelists, but more significant tension between panelist optimism and audience concerns about exponential AI development. The implications suggest a need for more robust dialogue between AI industry leaders and broader stakeholders about the pace and scale of AI transformation.
Partial agreements
Partial agreements
Both agree on the need for international standards that can be customized locally, but Lee focuses on the tension between cross-border applicability and cultural customization, while Sachin emphasizes the inadequacy of copy-pasting and the need for continuous adaptation.
Speakers
– Lee Tiedrich
– Sachin Kakkar
Arguments
Need for accelerated international standards development while recognizing cultural customization requirements
Importance of localizing global standards rather than copy-pasting regulations, with continuous auditing as AI evolves
Topics
Artificial intelligence | The enabling environment for digital development
Both recognize the need for comprehensive, multi-faceted approaches to skills development, but Amanda focuses on infrastructure and partnerships while Amit emphasizes direct workforce intervention and personal motivation.
Speakers
– Amanda Craig Deckard
– Amit Chadha
Arguments
Holistic approach needed including infrastructure, multilingual AI capabilities, and local partnerships to scale skills development
Three-pronged approach: college curriculum updates, upskilling current workforce while billable, and encouraging personal technology development time
Topics
Capacity development | Social and economic development
Both recognize infrastructure and collaboration challenges for AI deployment, but Lee focuses specifically on data standardization and sharing frameworks, while Amanda emphasizes broader infrastructure needs including connectivity and energy access.
Speakers
– Lee Tiedrich
– Amanda Craig Deckard
Arguments
Need for data standardization and voluntary sharing frameworks to enable AI customization for different regions
Digital divide remains a critical challenge requiring infrastructure investment and local community collaboration
Topics
Closing all digital divides | Data governance | Information and communication technologies for development
Similar viewpoints
Both emphasize the importance of building technical foundations and understanding local needs before implementing top-down solutions, whether in regulation or digital access
Speakers
– Lee Tiedrich
– Amanda Craig Deckard
Arguments
Bottom-up approach starting with technical evaluation frameworks before regulation to avoid premature regulatory constraints
Digital divide remains a critical challenge requiring infrastructure investment and local community collaboration
Topics
The enabling environment for digital development | Closing all digital divides
Both speakers view India as moving beyond being a service provider to becoming an innovation center focused on practical, ground-level AI applications
Speakers
– Sachin Kakkar
– Amit Chadha
Arguments
India focused on grassroots AI impact for farmers, schools, NGOs, and hospitals rather than just governance or influence
India transitioning from back office to front office for AI development, creating products for global markets
Topics
Social and economic development | The digital economy | Information and communication technologies for development
Both emphasize the critical importance of multilingual AI capabilities and sharing security tools to address vulnerabilities and ensure inclusive AI development
Speakers
– Amanda Craig Deckard
– Sachin Kakkar
Arguments
Multilingual AI robustness critical for security, as attackers exploit low-resource language vulnerabilities for prompt injection attacks
Sharing of tools like SynthID for AI-generated content detection and secure AI frameworks with industry partners
Topics
Building confidence and security in the use of ICTs | Closing all digital divides | Artificial intelligence
Takeaways
Key takeaways
International AI standards must balance global consistency with local customization for different cultures, languages, and market constraints
The AI skills gap requires urgent attention through public-private partnerships, with workforce development needing to be continuous rather than one-time due to rapid technology evolution
India is positioning itself as a front office for global AI development rather than just a back office, focusing on grassroots impact for farmers, schools, and healthcare
AI security requires self-defending systems using AI agents to counter AI-powered attacks, with multilingual robustness being critical to prevent exploitation of language vulnerabilities
Carrot-based approaches (recognition, hackathons, integrated learning) are more effective than mandates for workforce AI adoption and upskilling
The risk of AI economic value concentrating in developed markets can be mitigated through intentional democratization efforts, co-creation approaches, and open-source frameworks
AI development is moving exponentially rather than linearly, creating concerns about rapid economic displacement and the need for enhanced AI literacy
Data standardization and voluntary sharing frameworks are essential for enabling AI customization across different regions and cultures
Resolutions and action items
Microsoft committed to upskilling 20 million Indians by 2030 through their Elevate for Educators program
Google launched IndIC GenBench supporting 29 Indian languages for LLM model assessment and fine-tuning
Industry partners agreed to expand the Coalition of Secure AI Framework (COSI) in APAC including India
Development of multilingual jailbreak benchmarks including Indic and Asian languages through ML Commons collaboration
NIST to continue collecting stakeholder input for the zero draft to feed into ISO standards process
Companies to mandate use of agentic technologies by employees, especially in developing countries
L&T Technology Services to continue three-pronged approach: college curriculum updates, workforce upskilling during billable hours, and encouraging personal technology development time
Unresolved issues
How to address the speed of AI development potentially outpacing upskilling and transition processes
Managing economic displacement as AI could potentially automate much more than the conservative 60% estimate
Bridging the fundamental digital divide including last-mile internet connectivity and energy access in rural areas
Information arbitrage between AI pioneers and general population leading to power polarization
Lack of data standardization and standard agreements for voluntary data sharing across borders
Balancing innovation with appropriate levels of regulation without stifling technological advancement
Addressing the gap between AI governance frameworks and actual grassroots implementation
Suggested compromises
Start with global standards but adapt them to local constraints and capabilities rather than direct copy-pasting
Use bottom-up approach beginning with technical evaluation frameworks before implementing regulation
Combine carrot-based incentives with budget constraints to motivate workforce development without punitive measures
Focus on teaching core analytical and communication skills alongside AI literacy to enable adaptation
Implement continuous auditing and scanning rather than one-time certification as AI systems evolve
Balance global AI safety standards with local customization needs for different languages and cultures
Integrate governance conversations with deployment and impact discussions rather than treating them separately
Thought provoking comments
Whatever work we’re doing in engineering consulting today, I want to say 40 to 50% of that is new and built in the last five years, did not exist. I also want to say that whatever we are doing today, 60% will be gone in about three to five years time. That’s the rate and pace of change.
Speaker
Amit Chadha
Reason
This comment provides a stark quantification of the unprecedented pace of technological change, making abstract concepts of disruption concrete and immediate. It challenges the traditional approach to workforce development and highlights why conventional training methods are insufficient.
Impact
This observation fundamentally reframed the skills discussion from incremental upskilling to radical workforce transformation. It led other panelists to acknowledge the urgency of the challenge, with Julian later emphasizing how quickly skills become obsolete and the need for continuous evolution.
AI that is not robust in its multilingual and multicultural capabilities does have additional security weaknesses… if a model or system is primarily prepared to perform well in high resource languages, but not in low resource languages… an attacker could use that language and jailbreak the system.
Speaker
Amanda Craig Deckard
Reason
This insight brilliantly connects two seemingly separate issues – digital inclusion and cybersecurity – revealing how inequality in AI development creates systemic vulnerabilities. It demonstrates that diversity isn’t just about fairness but about fundamental system security.
Impact
This comment elevated the conversation beyond social equity to strategic necessity, showing how multilingual AI capabilities are essential for security. It prompted Sachin to discuss AI-versus-AI defense systems and reinforced the technical imperative for inclusive AI development.
First time, with AI, we can reverse the defender’s dilemma… majority of defenders’ time, 80%, goes in drudgery and skunk work. And AI can actually automate and uplift that work.
Speaker
Sachin Kakkar
Reason
This reframes AI from a source of new security threats to a potential solution for a fundamental asymmetry in cybersecurity. The ‘defender’s dilemma’ concept provides a clear framework for understanding why AI could be transformative for security rather than just disruptive.
Impact
This shifted the security discussion from defensive concerns about AI risks to offensive opportunities for AI solutions. It introduced the concept of ‘self-defending systems’ and positioned AI as potentially advantageous to defenders for the first time in cybersecurity history.
India is no longer the back office for AI. It is actually the front office for AI for the world.
Speaker
Amit Chadha
Reason
This powerful reframing challenges decades of perception about India’s role in global technology, moving from cost-based services to innovation leadership. It encapsulates a fundamental shift in global AI development dynamics.
Impact
This comment served as a capstone to the entire discussion, synthesizing themes about India’s unique approach to grassroots AI implementation. It reinforced the conference’s central theme that developing countries can lead rather than follow in AI development.
Copy pasting the regulations or standards from international markets to local markets may not always work. So localizing them, understanding the needs and constraints of the local area… We need a continuous scanning and auditing to make sure we avoid any temporal drift.
Speaker
Sachin Kakkar
Reason
This challenges the assumption that global standards can be universally applied, introducing the critical concepts of localization and temporal drift in AI governance. It highlights the dynamic nature of AI systems that makes one-time compliance insufficient.
Impact
This observation shaped the entire regulatory discussion, leading Lee to emphasize the tension between global standards and local customization. It established the framework for discussing adaptive, culturally-sensitive AI governance throughout the panel.
If we didn’t have foreign workers in the U.S., we would fall behind the rest of the world. You don’t hear that too often.
Speaker
Julian Waits
Reason
This candid admission challenges American technological exceptionalism and acknowledges the critical dependence on global talent, particularly from countries like India. It’s a rare moment of vulnerability from a US perspective.
Impact
This comment added authenticity to the discussion about global AI collaboration and reinforced arguments about the importance of international partnerships. It validated other panelists’ points about the global nature of AI development and the need for inclusive approaches.
Overall assessment
These key comments fundamentally shaped the discussion by challenging conventional assumptions and introducing new frameworks for understanding AI development challenges. Amit’s quantification of technological change velocity established urgency that permeated subsequent discussions. Amanda’s connection between multilingual AI and security transformed the inclusion conversation from moral imperative to strategic necessity. Sachin’s insights on localization and the defender’s dilemma provided new conceptual frameworks that other panelists built upon. The cumulative effect was a discussion that moved beyond surface-level policy recommendations to deeper structural insights about AI development, security, and global collaboration. The conversation evolved from addressing AI challenges to reimagining AI opportunities, particularly positioning developing countries as potential leaders rather than followers in responsible AI development.
Follow-up questions
How can we create voluntary foundations and standardized agreements for data exchange to enable AI customization for different regions?
Speaker
Lee Tiedrich
Explanation
Lee emphasized that customization of AI for different regions depends on data, but currently there’s no data standardization, no standard agreements for data exchange, and no Creative Commons licenses for data, creating friction and transaction costs
How can we develop Creative Commons-style licenses specifically for data sharing in AI development?
Speaker
Lee Tiedrich
Explanation
This was identified as a critical gap needed to enable voluntary and responsible data sharing for AI localization across different cultures and regions
What are the specific gaps and transition processes needed to address economic displacement caused by AI’s rapid advancement?
Speaker
Audience member
Explanation
An audience member raised concerns about the speed of AI technology potentially outpacing upskilling efforts and creating economic displacement, asking what gaps need to be addressed in the transition process
How can we address the information arbitrage and power polarization effects of AI’s exponential growth on democracies?
Speaker
Audience member
Explanation
An audience member expressed concern about the information gap between AI pioneers and others, and the potential polarizing effects on democratic institutions as AI moves exponentially rather than linearly
What specific strategies can effectively bridge the digital divide beyond infrastructure, particularly for last-mile connectivity in rural areas?
Speaker
Rita Soni (Audience member)
Explanation
Rita Soni highlighted that while digital divide was mentioned, there wasn’t enough discussion on practical solutions for bridging it, especially regarding basic internet connectivity in rural areas before even considering AI enablement
How can we better measure and track AI diffusion to ensure interventions are working effectively?
Speaker
Amanda Craig Deckard
Explanation
Amanda mentioned the need for measuring diffusion to understand if interventions are working and how they can be improved, but this area requires further development and research
How can continuous scanning and auditing systems be developed to prevent temporal drift in AI standards and applications?
Speaker
Sachin Kakkar
Explanation
Sachin noted that one-time audits or certifications may not work as AI evolves, requiring research into continuous monitoring systems
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.
Related event

