Welfare for All: Ensuring Equitable AI in the World's Democracies
20 Feb 2026 18:00h - 19:00h
Summary
The panel opened by warning that without intervention most of AI's economic value could become concentrated in Western corporations and China, with estimates that up to 70% may reside there, and argued that this outcome is not inevitable: AI's benefits can be democratized through intentional design and international collaboration [2-4][6-7].
Lee explained that the newly released international AI safety report highlights progress in evaluation but stresses a gap that can be narrowed by expanding standards such as ISO 42001, by accelerating work through NIST drafts and regional pre-standard initiatives like the Hiroshima AI Process, and by allowing cultural and linguistic customization [33-41][44-46]. Sachin added that simply copying regulations across markets often fails, citing Google's IndicGenBench, which supports 29 Indian languages, as an example of needed localization, and emphasized the necessity of continuous auditing to prevent drift as AI models evolve [48-56].
Building on this, participants described co-creation models where developers and governments act as enablers rather than barriers, with Google promoting open-source frameworks, the Secure AI Framework (SAIF), tools such as SynthID, and the Coalition for Secure AI (CoSAI) to support capacity building and workforce upskilling [61-78]. Amit highlighted that excessive regulation can stifle innovation and that his company balances carrots (recognition for patents, papers, and speaking engagements) with budget allocations for training to boost productivity, reporting an increase from 73% to 83% utilization [131-133][218-233]. Microsoft's Amanda detailed the "Microsoft Elevate" initiative, which aims to upskill 20 million Indians by 2030 through partnerships with schools, vocational institutes, and government ministries, and stressed a holistic approach that includes infrastructure, multilingual AI, local deployment, and diffusion measurement [105-126].
The discussion then turned to trust and security, with Brad noting a U.S. fintech survey showing less than 20% public trust in AI and raising concerns about prompt-injection attacks, especially in low-resource languages [241-244]. Amit explained that attackers can exploit unsupported languages to jailbreak models, and that expanding the MLCommons jailbreak benchmark to include Indic and Asian languages is a step toward mitigating such threats [250-257]. Sachin argued that AI-driven defensive agents can reverse the traditional "defender's dilemma" by automating routine security work, thereby giving defenders an aggregate advantage over attackers [263-272].
Lee concluded that global cooperation across government, academia, industry, and civil society is essential, and that standardizing data formats and licensing will enable the regional customization needed for AI to support UN Sustainable Development Goals [283-294]. Amit reflected that India’s focus is on grassroots AI impact for farmers, schools, and hospitals, positioning the country as a front-office for AI rather than a back-office [307-313]. Amanda observed that recent weeks have integrated governance with impact discussions, emphasizing multilingual AI, partnership, and recent Indian legislation on AI-generated content as signs of mature, responsible deployment [334-342].
The panel closed on an optimistic note, asserting that despite digital-divide challenges, collaborative AI solutions and continued public-private partnerships can address both technical and societal risks [409-414].
Key points
Major discussion points
– International collaboration and adaptable standards are essential to prevent AI value concentration in a few Western or Chinese entities. Brad frames the risk of a 70% concentration of AI's economic value in those regions and stresses the need for intentional design and global cooperation [2-4][6-8]. Lee highlights the role of ISO and NIST in drafting standards while warning that standards must be customizable for different languages and cultures [33-41][44-46]. Sachin adds that simply copying regulations across borders often fails, underscoring the need to localize standards for diverse markets [48-52].
– Public-private capacity-building and upskilling programs are critical to bridge the AI skills gap, especially in developing economies. Amanda describes Microsoft's "Elevate" initiative, its multi-year commitment to train millions of Indians and its partnership with schools and ministries [105-124][125-126]. Amit explains L&T's three-pronged approach: collaborating with colleges, upskilling current staff while they remain billable, and incentivizing personal research and patent work [149-158][165-174]. Sachin stresses continuous auditing of AI models because a one-time certification cannot keep pace with rapid model evolution [56-57].
– A tension exists between global regulation/standards and the need to foster innovation; a co-creation, adaptive approach is advocated. Brad asks whether setting global standards may hinder innovation [79-87]. Lee warns that moving too quickly to regulation can outpace technological change, suggesting a bottom-up, science-first evaluation framework before deciding on rules [89-96]. Sachin argues that global standards should be a flexible “creative tension” that adapts to local constraints such as bandwidth and linguistic diversity [81-87]. Amit echoes the concern that over-regulation can stifle innovation and calls for careful, targeted rules [130-133].
– Security, trust, and AI-specific cyber risks (e.g., prompt-injection) require immediate, scalable defenses, including multilingual robustness. Brad notes the public’s low trust in AI-driven financial services and asks for priority actions against threats like prompt injection [237-244]. Amit points out that models weak in low-resource languages become attack vectors, and he cites work on a multilingual jailbreak benchmark to harden systems [250-258]. Sachin describes the development of self-defending AI agents that act like an immune system, aiming to give defenders an aggregate advantage over attackers [263-272].
– Localization, including multilingual AI, culturally aware data standards, and open data frameworks, is vital for equitable AI deployment. Sachin's IndicGenBench demonstrates the need for language-specific evaluation tools [53-55]. Amit stresses that poor support for Indic languages can enable prompt-injection attacks, reinforcing the push for multilingual capabilities [250-258]. Lee calls for voluntary data-exchange foundations and standardized data licenses to reduce friction in cross-regional collaborations [286-294].
Overall purpose / goal of the discussion
The panel convened to explore how the global AI ecosystem can be democratized: preventing concentration of economic value, establishing inclusive standards, building a skilled workforce, ensuring security and trust, and tailoring AI to diverse cultural and linguistic contexts through coordinated public-private and international effort.
Overall tone
The conversation begins with a measured, forward-looking tone emphasizing collaboration and optimism about shaping AI’s future. As the dialogue progresses, it becomes more technical and urgent, addressing concrete challenges such as regulatory trade-offs, skills shortages, and security threats. By the closing remarks, the tone shifts to reflective optimism, acknowledging the rapid pace of change while expressing confidence that coordinated action can deliver equitable, trustworthy AI outcomes.
Speakers
– Amit Chadha – Managing Director and CEO, L&T Technology Services – expertise in AI engineering, technology services, and industry leadership. [S1]
– Sachin Kakkar – India Site Development, Privacy, Safety and Security, Google – expertise in AI privacy, safety, security, and localization for Indian markets. [S4]
– Amanda Craig Deckard – Senior Director, Office of Responsible AI, Microsoft – expertise in responsible AI policy, AI governance, skilling initiatives, and digital inclusion.
– Brad Staples – Panel moderator/host – expertise in AI policy discussion facilitation and moderation. [S6]
– Lee Tiedrich – Inaugural AI Multidisciplinary Initiative Fellow, University of Maryland; Senior Advisor on the International AI Safety Report – expertise in AI safety standards, international collaboration, and evaluation frameworks.
– Julian Waits – Chief Experience Officer, Rapid7 – expertise in cybersecurity, AI security, and AI-driven threat mitigation.
– Audience – Various participants (e.g., Yuv from Senegal, Professor Charu from the Indian Institute of Public Administration, Dr. Nazar) – expertise not specified. [S13][S14][S15]
Additional speakers:
– Steve – Briefly addressed by Sachin Kakkar (“Thanks, Steve”); role and expertise not identified in the transcript.
1. Opening framing (Brad Staples) – Brad warned that, if current trends continue, roughly 70% of AI's economic value could become concentrated in Western corporations and China [2-4]. He emphasized that this outcome is not inevitable; democratizing AI will require intentional design, international collaboration, and coordinated action across research, workforce development, private-sector partnerships, and robust safety and security measures [6-8].
2. International standards & evaluation (Lee Tiedrich) – Lee presented the second International AI Safety Report, noting progress in evaluation techniques but a persistent gap [33-36]. She highlighted ISO 42001 as an early standard and described the NIST "zero draft" that will feed into future ISO work [38-41]. Regional pre-standard initiatives such as the Hiroshima AI Process were cited as venues for cross-regional stakeholder cooperation [42-46].
3. Localization & continuous compliance (Sachin Kakkar) – Sachin argued that transplanting regulations across markets often fails, underscoring the need for localization. He showcased Google's IndicGenBench, which supports 29 Indian languages, 12 scripts, and four language families for fine-tuning large language models [52-55]. He warned that one-off audits are insufficient for evolving models and advocated continuous scanning pipelines to prevent temporal drift [56-57].
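The continuous-audit idea can be sketched as a recurring comparison of fresh evaluation scores against certified baselines. This is an illustrative sketch only, not any vendor's actual pipeline: the suite names, scores, and drift tolerance below are invented for the example.

```python
DRIFT_TOLERANCE = 0.05  # illustrative: maximum allowed score drop vs. baseline

def audit(baselines, latest_scores, tolerance=DRIFT_TOLERANCE):
    """Compare the latest evaluation scores against certified baselines and
    return the suites whose scores drifted below the allowed tolerance."""
    findings = []
    for suite, baseline in baselines.items():
        score = latest_scores.get(suite)
        # Flag a suite if it was skipped or regressed beyond the tolerance.
        if score is None or baseline - score > tolerance:
            findings.append((suite, baseline, score))
    return findings

# Hypothetical safety suites: a multilingual one regresses after a model update.
baselines = {"toxicity_en": 0.92, "toxicity_hi": 0.88, "jailbreak_ta": 0.90}
latest = {"toxicity_en": 0.93, "toxicity_hi": 0.86, "jailbreak_ta": 0.81}
flagged = audit(baselines, latest)
```

Run on a schedule (or on every model revision), this kind of check is what turns a one-time certification into the continuous auditing the panel calls for.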
4. Co-creation model (Sachin Kakkar) – Building on localization, Sachin described the open-source Secure AI Framework (SAIF) and tools such as SynthID, a watermarking technique that flags AI-generated content [63-66][74-76]. He also outlined the Coalition for Secure AI (CoSAI), an industry partnership expanding across APAC, and stressed capacity building through threat-intelligence sharing and workforce upskilling [69-78].
5. Global standards vs. regulation trade-off – Lee argued that regulation should follow robust, evidence-based evaluation frameworks, warning that regulators often lag behind rapid technological change [89-95]. Amit cautioned that excessive regulation can stifle innovation and must be applied judiciously [130-133]. Sachin framed the tension as a “creative tension”: global standards should be adapted to local constraints such as bandwidth and linguistic diversity, turning potential hurdles into co-creation opportunities [81-87].
6. Skills-gap & public-private upskilling
– Microsoft (Amanda Craig Deckard) – The "Elevate" program aims to upskill 20 million Indians by 2030, combining cloud-compute access, AI tools, and partnerships with schools, vocational institutes, and government ministries. A dedicated "Elevate for Educators" track trains teachers at scale. The effort sits within a five-pillar strategy: hard infrastructure, AI compute capacity, multilingual AI, local deployment, and systematic diffusion measurement [108-118][119-126].
– L&T Technology Services (Amit Chadha) – L&T pursues a three-pronged approach: (i) collaborating with colleges to refresh curricula for the next five years [149-152]; (ii) upskilling current employees while they remain billable, integrating training into project work [156-164]; and (iii) incentivizing personal research time, raising patent filings from 50 to 200 per year and increasing staff contributions beyond billable hours from 19% to 52%, which lifted productivity from 73% to 83% [165-174][230-233].
– Rapid7 (Julian Waits) – Julian noted that Rapid7 relies on talent from abroad to maintain its competitive edge and highlighted that AI can eliminate 60% of the routine tasks humans currently perform [300-304][366-368].
7. Carrot-vs-stick discussion – Brad asked whether global standards might hinder innovation. Amanda responded with mixed tactics such as weekly tips and hackathons to encourage adoption [89-95]. Amit advocated a "carrot-only" approach, using patent-glory incentives and budget allocations to motivate compliance [130-133].
8. Trust, security & multilingual vulnerabilities – Brad cited a YouGov survey showing fewer than 20% of Americans trust AI in financial services [241-244]. Amit explained that models weak in low-resource languages become attack vectors; attackers can jailbreak systems by exploiting unsupported languages such as Tamil [250-252]. To counter this, Google contributed to an expanded MLCommons jailbreak benchmark that now includes Indic and other Asian languages [255-257].
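Why extending a jailbreak benchmark across languages matters can be illustrated with a per-language attack-success-rate calculation. This is a hypothetical sketch, not MLCommons' actual harness; the log entries and language codes are invented for the example.

```python
from collections import defaultdict

def attack_success_rates(results):
    """results: iterable of (language, attack_succeeded) pairs from a
    red-team run. Returns the jailbreak success rate per language."""
    totals, hits = defaultdict(int), defaultdict(int)
    for lang, succeeded in results:
        totals[lang] += 1
        if succeeded:
            hits[lang] += 1
    return {lang: hits[lang] / totals[lang] for lang in totals}

# Illustrative red-team log: the same jailbreak prompts, translated per language.
log = [("en", False), ("en", False), ("en", True),
       ("ta", True), ("ta", True), ("ta", False)]
rates = attack_success_rates(log)
```

A gap between `rates["ta"]` and `rates["en"]` is exactly the signal the panel describes: an English-only benchmark run would never surface the weakness in the low-resource language.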
9. AI-driven defensive agents (Sachin Kakkar) – Sachin described emerging AI-driven defensive agents that act like an immune system. He argued that, unlike the traditional “defender’s dilemma,” AI can automate 80 % of routine defensive work, giving defenders an aggregate advantage [263-272].
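The "automate the routine, escalate the novel" division of labor Sachin describes can be sketched with a simple triage rule. A real defensive agent would use learned models rather than a static allowlist; all names, patterns, and thresholds below are hypothetical.

```python
# Hypothetical set of alert patterns known to be routine and benign.
KNOWN_BENIGN = {"scheduled_scan", "cert_renewal", "patch_rollout"}

def triage(alerts):
    """Split alerts into auto-handled routine ones and human-escalated ones.
    alerts: list of dicts with 'pattern' and 'severity' (1=low .. 5=critical)."""
    auto, escalate = [], []
    for alert in alerts:
        if alert["pattern"] in KNOWN_BENIGN and alert["severity"] <= 2:
            auto.append(alert)       # routine: close automatically
        else:
            escalate.append(alert)   # novel or severe: human judgement
    return auto, escalate

alerts = [
    {"pattern": "scheduled_scan", "severity": 1},
    {"pattern": "cert_renewal", "severity": 2},
    {"pattern": "unknown_beacon", "severity": 4},
]
auto, escalate = triage(alerts)
```

The aggregate advantage the panel mentions comes from the ratio: the more of the alert stream that lands in `auto`, the more defender attention is concentrated on the genuinely novel cases.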
10. Data-exchange & licensing (Lee Tiedrich) – Lee called for voluntary data-exchange foundations, standard agreements, and Creative Commons-style licenses for data to lower friction in cross-border collaborations [280-285].
11. Closing reflections – Lee stressed that global cooperation across government, academia, industry, and civil society remains vital for mitigating AI risks and achieving the UN Sustainable Development Goals [283-286]. Amit highlighted India’s shift from a “back-office” to a “front-office” AI role, focusing on grassroots impact for farmers, schools, and hospitals [307-313]. Julian warned that the industry’s rapid pace could render today’s skills obsolete within five years, yet reiterated that AI can automate a large share of security tasks while still requiring human judgement [300-304][366-368]. Amanda reiterated that recent weeks have seen genuine integration of governance with impact discussions, citing India’s new law on marking AI-generated content as evidence of mature, responsible deployment [334-342].
12. Audience Q&A – Rita Soni raised digital-divide concerns, referencing the Digital Empowerment Foundation, prompting discussion of Microsoft’s infrastructure and diffusion work [360-376]. An audience member warned of “information arbitrage” between AI creators and broader society, echoing fears that exponential AI growth could outpace up-skilling and exacerbate power polarisation [387-393]. Lee responded by advocating AI literacy, problem-solving skills, and lifelong learning as remedies [387-393].
13. Consensus pillars – The panel converged on four pillars: (1) globally coordinated yet locally adaptable AI standards; (2) evidence-based evaluation before regulation; (3) large-scale public-private capacity-building programs to close the skills gap; and (4) multilingual AI as both an inclusion and security imperative [6-8][33-41][108-126][263-272]. Points of disagreement centered on the timing and extent of regulation (Amit warned against over-regulation while Lee urged robust technical evaluation first) and on the magnitude of imminent job displacement, with Julian optimistic about AI-assisted roles and an audience member fearing far greater displacement [89-95][130-133][300-304][360-376].
14. Action items – Expand ISO 42001 and NIST drafts to cover cultural variations; extend multilingual benchmarks; scale Microsoft Elevate; reinforce L&T's incentive-based upskilling; develop self-defending AI agents; implement continuous audit pipelines; and establish voluntary data-sharing frameworks [38-41][250-257][119-126][165-174][263-272][280-285].
by corporations, by innovators to secure that outcome. And if current trends continue, the majority of AI's economic value risks being centered in the hands of countries and corporations in the Western economies and in China. And some estimates suggest that 70% of the value could be created and reside in those locations. And I think it's for us in this context to think a bit about why we don't need to accept that outcome. It's by no means an inevitability. And to democratize the impact of AI, it requires intentional design, it takes international collaboration, and it takes societies coming together to ensure that doesn't happen. It also takes innovation and research, workforce development, private sector partnerships, and also trust, safety, and security.
And they're the things we're going to talk about on the panel today. And my colleagues are extremely well-placed to share their thoughts and insights on those topics. So let me introduce the panel. We have Amit Chadha, Managing Director and CEO of L&T Technology Services. Good to see you, Amit.
Happy to be here.
Great to have you with us. Amanda Craig Deckard, Senior Director, Office of Responsible AI at Microsoft. Great to have you with us. Sachin Kakkar from India Site Development, Privacy, Safety and Security at Google. Good to have you with us, Sachin. Thank you for being with us. Lee Tiedrich, Inaugural AI Multidisciplinary Initiative Fellow, University of Maryland, Senior Advisor on the International AI Safety Report. Lee, good to have you with us. And last but by no means least, Julian Waits, Chief Experience Officer with Rapid7. Good to have you with us. Okay. So without further ado, let's take a look at international and scientific research collaboration. And, Lee, let me come to you.
Let me pose a question. The second international AI safety report was released just ahead of this conference, something that you're very much an author of. Let's start by hearing from you and then maybe, Sachin, I'll bring you in. What opportunities do you see, Lee, in open international standards to address the technical challenges that we face while also building trust in AI-based systems and services? How would you characterize those challenges, and which are most critical in a developing country context?
Yeah, thanks for the question, Brad, and there's a lot here. So the international AI safety report that I worked on with a panel of about 100 experts was just released. And one of the key takeaways from the report is that while we have made a lot of progress over the past year in evaluations and developing evidence, there's still a long way to go. There's a gap. And I think international standards organizations and similar efforts are a good way to work together to try to fill some of the gaps. ISO has already released one standard, 42001, which is a good start, but we need to accelerate this, and we need to also recognize the fact that with standards and evaluation metrics, you know, there's a tension.
On the one hand, we want them to be able to apply across borders because we want to enable companies to have responsible technology flow across borders. But on the other hand, because we all differ in terms of language and culture, it's really important that we be able to customize them for different cultures, norms, languages. And I think, you know, the standards organizations will continue to play an important role. I spent a year working at NIST, the U.S. National Institute of Standards and Technology. One of the NIST projects is working on what we call the zero draft, trying to create a draft that we could then feed into the ISO process, and NIST is trying to collect stakeholder input into that draft.
And I think, you know, more globally, you know, efforts like the Hiroshima AI process, there are sort of all these pre-standards efforts where different stakeholders across different regions can work together. And I think the AISIs, the AI safety institutes across different countries, can coordinate. So I think there's a lot of work to be done, but I think there's a lot of avenues where we can collaborate together and make sure that we're addressing the needs of everybody around the globe. Thank you.
Yeah, thanks, Steve. Very well covered. If I can add just a few more points. I think one of the challenges we see is that copy-pasting the regulations or standards from, you know, international markets to local markets may not always work. So localizing them, understanding the needs and constraints of the local area. Google launched IndicGenBench. It's a test bench for fine-tuning and assessing LLM models for local languages, supporting 29 Indian languages, 12 scripts, and 4 language families. So that shows an example of how we need to localize things. The second point is a one-time audit or certification may not work as AI evolves. We need continuous scanning and auditing to make sure we avoid any temporal drift in these standards and the applications.
So Sachin, let’s build on that. How do governments and developers collaborate in a way that we get the outcome that everyone desires, which is not to see the developed markets race ahead of developing countries? What does that collaboration need to look like?
Yeah, that's an interesting question. I think at the highest level, the way we think to bridge the AI divide is to move away from a traditional transfer approach to more co-creation, where developers and government come together, and the underlying goal is that standards and regulations are seen as enablers and equalizers, not as barriers or compliance hurdles. There are three specific dimensions in which we believe developers and government can collaborate, and Google specifically focuses on: number one, open-source frameworks and interoperability and standards; second, capacity building; and third, workforce upskilling and research. I'll quickly unpack each one of them. Starting with open-source frameworks: AI is not new to Google. We have been working on AI for past decades (remember AlphaFold), and we were the first to share the transformer paper on which all the LLMs are built. When we were building AI, we were also focusing on best AI practices and safety practices on AI. And we have open-sourced all the best practices to keep AI safe.
SAIF, the Secure AI Framework, is something we have shared outside. And it is important to understand supply chain risk. India's digital transformation is characterized by DPI, the digital public infrastructure on which Aadhaar and UPI are built, so they can actually leverage some of this secure AI framework to make sure malware attacks and vulnerabilities in open-source components are taken care of. Now, standards is one thing; the collaboration goes beyond to adoption of them. And Google has co-built CoSAI, the Coalition for Secure AI, with various industry partners, and this is what we are expanding in APAC, including India. Now, we are also committed to capacity building with the government, which means we need to provide tools and infrastructure, not just standards.
So we are proactively sharing threat intelligence. We are building tools like SynthID and sharing them with the broader community. SynthID is a watermarking technique which goes into text, image, video, and audio, and it can tell you whether content is AI-generated. So some of these tools are also helping us make sure our commitment towards standards goes into actual adoption. And finally, upskilling the workforce: digital literacy, working with government to make sure the vulnerable sections of society, like the elderly and teenagers, are aware of some of these challenges. And giving grants to institutes like the IITs to push the frontier of research, like PQC, post-quantum cryptography, are other areas of collaboration between AI developers and the government and academia.
Let me just ask you both a question. Is there a trade-off between setting global standards and regulation, and ensuring the right environment for innovation and collaboration?
Oh, yeah, that's right. And that's where you can start with the global regulations but then adapt them to the local constraints. Like we have bandwidth constraints in India. We have linguistic diversity. And therefore, the global standard should not become a hurdle for the young startups in India. Rather, they become co-creators in enabling the innovation that can happen and then evolve from there. So it's a creative tension, and I think the best way is to be adaptive in this situation and eventually evolve to the international standard.
How do you see this interplay, Lee?
Yeah, I think, I mean, kind of in my work, you know, both in government, academia, and I spent 30 years working with the private sector, I think sort of figuring out the standards that are in place and the evaluation techniques is really key. You know, how are we going to evaluate these systems so they can meet a certain threshold of safety? And then I think the question kind of comes in, you know, afterwards, once we know what it is, you know, should there be regulation or not? You know, I worry a lot of times that when we go too quickly toward the regulation, you know, the best of intentions may be there, but, you know, the technology is moving so quickly, regulators don't necessarily know how to style the regulations to achieve the goal.
And I think sort of working from the bottom up with the science, developing the evaluation technique, taking into account that we do need to socialize, you know, customize for local markets is really important. And then we can get to the question of, well, should there be a regulation or not? And that’s where, you know, different countries may have different answers, but at least we’re working from a common technical framework and evaluation framework to assess systems. Thank you.
Thank you both. Let's shift the conversation towards more public-private collaboration, which I think we know is at the heart of driving the success that everybody's looking for. And Sachin was talking a little bit about capacity building. Maybe we focus on those two elements. And Amanda, I'll come to you and then to Amit. So there's a persistent skills gap in AI. It's very apparent, and a lot's being done to try and bridge that here in this country. How has your organization, and I'll come to you, Amit, with the same question, grappled with that challenge and also collaborated with government to help narrow that skills gap?
Thank you. Yes, the skills gap is really important. We see it as part of the sort of foundational infrastructure for what we need to work on together as Microsoft with other industry partners, government partners, other local partners. It's going to take a whole community really working together to do this at scale. And just to take a step back for a moment briefly before I talk more specifically about skills, you know, we kind of see this as part of a holistic effort where you kind of need to support all of the enabling infrastructure for AI deployment, kind of from the infrastructure layer all the way through sort of realizing value in local use cases. So we actually published on Wednesday a blog from our president, Brad Smith, and our chief responsible AI officer, Natasha Crampton, where we talk about sort of five areas where we're really focused on investing to kind of close the gaps in AI diffusion between the global north and global south.
So we talk about, like, hard infrastructure investment, right, in terms of connectivity; AI compute capacity scaling is the second part of that plan. And the third part is really thinking about multilingual, multicultural AI capability. And the fourth is really working with local partners on local AI deployment, really what's going to serve local communities, and also what we can learn through that process around how we need to adapt the technology so it's ready for those local use cases. And the fifth is really measuring diffusion so that we actually understand how things are going and have really informed interventions. So that's the kind of holistic approach that we're thinking about for public-private partnership. And looking at skilling more specifically, we actually have a new sort of initiative that we launched last July at Microsoft called Microsoft Elevate, which is really bringing together a number of ways that we engage with a community that is going to also be part of skilling everyone at scale, so sort of nonprofit communities, schools, and actually ensuring that they're equipped with the technology itself, so with cloud compute access and with access to AI.
And then we are coupling that with investments in skilling. So we have made some big-number commitments around how we are really trying to do this at scale. Specifically for India, early last year we made a commitment to skill 10 million Indians by 2030. This year, we had already upskilled 5.6 million Indians, and so we actually doubled that commitment to 20 million people by the end of 2030. And one of the ways that we're doing that is we just announced this week a new Elevate for Educators in India program where we're partnering with local schools, with vocational institutes, with higher education institutions to sort of teach the teachers, right? So you can actually work at scale, and we're working with a number of Indian government ministries in this program to figure out how we can ensure that we have tailored programs for all of those different communities and that we're thinking holistically about how,
You know, we, across those different sort of educational paths, are really meeting people where they are and equipping them to kind of do the next powerful thing with AI.
Thanks, Amanda. And as a business, L&T Tech Services, I mean, part of L&T originating here in India, but now very much involved in global markets. How are you tackling this in terms of addressing the skills gap?
Sure. So thank you. So before I go to the skill gap, I do want to make a point on the regulation part. I do believe that too much regulation can stifle innovation as well. So we've got to be careful on how much we do and where we take it. And then the second part, of course, is to do regulation of traffic control in Delhi for our next event that we have. I think all of us will agree. Let's get down to skills in a second now. I had to say that because it was a mess in the last two days. I've got pictures of myself in an auto rickshaw as well. So if we get down to the skill gap, I want to address this three ways.
So I am responsible, I run a company which is potentially India's first engineering intelligence company, with about 25,000 employees. I've been CEO for five years. When I took over, we used to be about 15,000 employees; we're now about 25,000 employees. So, we look at the skill gap and I look at skill levels. Three things you have to think about. Whatever work we're doing in engineering consulting today, I want to say 40 to 50% of that is new, built in the last five years; it did not exist. I also want to say that whatever we are doing today, 60% will be gone in about three to five years' time. That's the rate and pace of change. So, while my colleague from Microsoft spoke about skilling in schools, STEM, as well as colleges, we're doing two different things, or three things, to stay current with the changing dynamics.
One, we are reaching out to colleges in the last year of their curriculum and making sure that the curriculum is contextual to what the industry needs. So we are sending our employees to teach, we are using CSR hours, we are doing all of that to build it up, and we are participating with NASSCOM as well on skill development. The second thing we are doing is upskilling our own employees. Now, in a developed economy it's very simple: you hear about these layoffs that happen all the time, and they are not because people don't have work but because the skill is redundant.
So the company goes ahead and gets a new set of skills. In an Indian context, as my colleague here said very nicely, you can't cut and paste. If you fire a thousand people, you will end up spending half your working hours, plus more, with the labor commissioner here locally. You can't do that. So you have to be able to skill people up while they are in the workforce. One part is developing curriculum and modules for them to go through, but the second part is actually making them do it. Normally in a consulting company you would send people to get coached and do upskilling when they are not billable. We are actually doing it while they are billable, because when they become non-billable, that's not when you want it; you want it before that, right? That's a major shift in how we've been operating. The third thing we are tracking, as an engineering and technology company, is how much personal time the employee is spending on technology development efforts beyond billing hours to the client. So you come in and spend 40 hours, right? That's what you normally work. Now, if you spend another three hours to write a technology paper, file a patent, or go speak at a symposium, all of that counts as technology effort beyond billable hours.
The percentage of the workforce within the company that did that five years ago was 19%. Today, 52% of our workforce spends personal time on technology beyond billable hours. And the net result has been that we used to file 50 patents per year; we have gone to filing 200 patents per year. So, summarizing: one, reach out to the local ecosystem and spend that last year of curriculum with them; that's the hook in. Second, upskill the workforce within. And third, beyond just money, find a bigger purpose, like technology or the betterment of the human race through technology, to motivate your workforce to actually spend time on it.
And I think that's what we've been doing, and we think it will be helpful. One last thing, and we keep discussing India, but if I look at the US today, and I've lived there for 27 years now, we will need schools to start mandating a certain level of STEM education. Both my boys went to public schools in Virginia, and I can tell you that in some schools it's broken. We don't do that in the US, we don't do it in parts of Europe, so we will continue to look to different countries for skills. And that is not where we want to be in 20 years' time. I'm sorry. Jump in. Jump in, Julian.
I was going to agree with what you just said, because Rapid7, like your company, is of course a software company. We've basically mandated the use of agentic technologies by our employees, especially the ones in developing countries, or countries that aren't as developed as the United States. What I would tell you also about the education system, which is a uniquely US problem, and which is what makes India special and why we're in such a wonderful place with technology, is that because we're so far behind, we're forced to rely on labor from other societies that appreciate STEM education and embed it in the way that they learn. We have no choice. If we didn't have foreign workers in the U.S., we would fall behind the rest of the world.
You don’t hear that too often.
Let me just probe a little bit on this. How much is carrot and how much is stick when you're looking to upskill the workforce and bring them into more of an AI mindset? You've got a very bold program at Microsoft reaching across colleges, but you're also active, I know, in creating the capabilities within the workplace. How much of this, to both of you, is carrot or stick? I was at a dinner in D.C. a few weeks ago where the head of a large media group had told his team they had to be two times more productive using AI by the end of 2025 to stay in their roles, and 10 times more productive by the end of 2026.
That was an expectation. But it was set very much as a minimum standard and goal. They were putting training programs in place, but there was a clear metric to achieve. What’s your perspective based on how you’ve seen this work?
You mean internally?
Either within Microsoft or within the companies that you collaborate with in training.
In our experience, I think we are leaning much more in the direction of using carrots. So we have a lot of programs internally that are a mix, and I think that mix of tactics is important. There's the day-long or week-long training program, which I think is really valuable; it gives you an opportunity to really dig in, but it's also difficult, difficult to find the time for. And so we actually have weekly tips for how colleagues in similar roles are using Copilot internally, for example, to have more efficiency in their work. And I feel like that's the kind of thing where, you know, is that skilling, is that training?
I don't know, but it certainly is helpful, because that's the kind of thing that in my day-to-day job I can look to and integrate much more easily. And the other thing that we've started doing is hackathon-type exercises internally that are not just oriented towards engineering communities. For example, our Corporate, External, and Legal Affairs group, which is not just lawyers but is a lot of lawyers, held a hackathon that really met that community where we are and built a Copilot to serve our day-to-day work. So a lot of different carrot approaches is what we're doing internally. And I personally can say, especially for the latter two, that it's just hard to find time to do a deep training program.
But if you integrate it into your day-to-day work and make it easy with these kinds of carrots, you can really start seeing the impact, and that motivates you to use the technology more.
So, stick is out of the window; you can't do that anymore, right? But we use carrots and budgets. When I say carrots, it's basically appealing to the individual and their glorification. If there's a patent, you're filing it; the company doesn't own it, you own it, right? If there's a paper, you're writing it. If you're speaking at a symposium, you're doing it. And that allows them to think. And then we've actually spent a lot of time through HR trying to explain that, with the pace of change of technology, if you don't upskill, if you don't change, you are actually facing extinction in about five to ten years' time. Gone are the days when you could stay on the same technology for 30 years; that will not work, right?
So we hammer home that message and then provide the push: we glorify people that file patents, we glorify them within the company. That's one. Second, when I come to budgets, we actually leverage budgets with our segment heads. They're given training budgets, and we also give them headcount budgets and say they can't exceed them. We've been able to actually improve productivity with AI: on a utilization basis, the productivity metric all service companies track, we used to run at about 73% five years ago. We're already at 83%, and I think I can push this up another 2% in terms of productivity levels in the company, again leveraging AI. That's the budget approach that we use, but with the seniors.
So it's a mix of both, if I may, to be able to manage and motivate this. But it's an ongoing exercise.
It's fascinating; maybe we'll come back to it as we draw to a close. Let me shift gears a little bit and talk a bit more about security and trust, and come to you, Julian, if I can. I think we've recognized, and we've heard it in different conversations this week, that there's a trust deficit around the use of AI, certainly in a public context. There is some fear, suspicion, and anxiety in a global context; I'm not talking just about India. YouGov carried out a survey in the U.S. last month, and in the context of fintech, they found that less than 20% of Americans trust AI in financial services. And people are also struggling, I think, with some of the cybersecurity questions and issues, which you're very well placed to address.
So if public trust in AI remains fragile and AI-specific cyber risks are growing, which they clearly are, what are the immediate steps that industry should prioritize to counter those threats, things like prompt injection attacks? And how can these solutions be scaled, particularly for developing countries?
So other than the incentives that we're giving you to learn these technologies, which of course is to the company's benefit, it's to your benefit, because these skills that you're learning and that you're going to be using will translate to the next thing that you do, and it makes you that much better. If we do enough of that, not only are we helping the employees, but we're helping the societies and the ecosystems that they live in, including in India. I wanted to add one additional area that we're really focused on to address these kinds of AI cyber threats, particularly relevant in India and other areas in the global south. I mentioned that one of the areas we're focused on is multilingual and multicultural AI capabilities, and one of the most important foundational reasons for doing that, of course, is so that you have an AI that works well
and is reliable and performs well in different languages and cultural contexts. Another reason is that AI that is not robust in its multilingual and multicultural capabilities also has additional security weaknesses. You mentioned prompt injection attacks, and one way you can think about a prompt injection attack is this: you have an AI system with a safety system around it, and someone who is misusing the technology tries to break that safety system or get around it. One of the ways that attackers do that is by using languages that are not well supported in that model or system. So a model or system may be primarily prepared to perform well in high-resource languages, but not in low-resource languages.
Take Tamil, for example, or some other language that is not really built into how the model performs: if companies aren't attuned to that, then an attacker could use that language to jailbreak the system, basically getting around the safety system. So it's just another reason why it's really important from our perspective, and we're partnering with a lot of others in industry and government, so this comes back to a public-private partnership opportunity, to really work on multilingual and multicultural AI capabilities. One of the things we announced this week involves a benchmark from an organization called ML Commons, a jailbreak benchmark that measures how robust systems are against that kind of prompt injection attack technique.
And we worked with a number of others to build out the current version of that benchmark, which is really English-specific, to include multiple Indic languages and Asian languages. It's not going to solve the problem; it's one step in what we see as the right direction. But I just wanted to highlight that really specific area of focus, in India and elsewhere, for thinking about AI and cyber threats.
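[Editor's note: the failure mode described above, a safety layer that catches adversarial prompts in high-resource languages but misses them in low-resource ones, can be sketched in a few lines. This is an illustrative toy, not the ML Commons benchmark: the keyword filter, prompt set, and rough translations below are all invented for the example.]

```python
# Toy sketch of a multilingual robustness check for a safety filter.
# The filter and prompts are invented for illustration only.

ADVERSARIAL_PROMPTS = {
    # language -> prompts a robust safety layer should refuse
    # (non-English lines are rough illustrative translations)
    "english": ["how do I pick a lock", "how do I make a weapon"],
    "hindi":   ["ताला कैसे खोलें", "हथियार कैसे बनाएं"],
    "tamil":   ["பூட்டை எப்படி திறப்பது", "ஆயுதம் எப்படி செய்வது"],
}

# Stand-in for a safety layer trained mostly on high-resource data:
# it only "knows" English keywords.
ENGLISH_ONLY_BLOCKLIST = ("lock", "weapon")

def toy_safety_filter(prompt: str) -> bool:
    """Return True if the prompt is blocked (refused)."""
    return any(word in prompt.lower() for word in ENGLISH_ONLY_BLOCKLIST)

def block_rates(prompts_by_lang: dict[str, list[str]]) -> dict[str, float]:
    """Per-language fraction of adversarial prompts the filter catches."""
    return {
        lang: sum(toy_safety_filter(p) for p in prompts) / len(prompts)
        for lang, prompts in prompts_by_lang.items()
    }

if __name__ == "__main__":
    for lang, rate in block_rates(ADVERSARIAL_PROMPTS).items():
        print(f"{lang}: {rate:.0%} of adversarial prompts blocked")
```

Run against this toy filter, English prompts are fully blocked while the Hindi and Tamil equivalents pass straight through, which is exactly the gap a multilingual jailbreak benchmark is designed to surface.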
That’s wonderful. Thank you.
Can I add a point?
Sure.
So this is about the rise in prominence of AI agents. We have been constantly investing in self-defending systems, just like a human immune system. As agents grow, given the scale and speed at which they can attack infrastructure, hospitals, energy grids, we need agents on the other side. This becomes an AI-versus-AI story, where we are smartly inventing agents. And we believe that, for the first time, with AI we can reverse the defender's dilemma. The dilemma, as many of you might already know, is that attackers have to find just one open wallet in the crowd, but defenders have to protect all the wallets all the time. For the first time, AI will give an aggregate advantage to defenders, because the majority of defenders' time, around 80%, goes into drudgery and grunt work.
And AI can actually automate and uplift that work. So the entire stack of defenders can improve and uplift with AI. And we believe that we’ll be able to build a self -defending adaptive system which can protect us from various vulnerabilities.
Wonderful. Thank you. Well, we're drawing towards the close of the session, and it's been a very rich conversation. I just wanted to take a step back and ask you all: most of you have been here all week, and you've heard a whole host of different interventions and some very significant investments and initiatives. What are your conclusions? What's changed in your perspective when you look at AI for the future from your own vantage point? What has this event given you a new perspective on, or crystallized in your minds? Maybe let me go back to Lee. Do you want to share your thoughts?
It's reinforced for me something I've seen through a lot of my international work with the OECD and the Global Partnership on AI: the need for global cooperation, and not just at the government level, but among all different types of stakeholders, within academia, within industry, within civil society, working together. And I think we can pause at this moment and say, if you look at the safety report, we've made a lot of progress over the last few years, but we need to continue to work together, and not just focus on the harms and the risks that AI can have, but think about the benefits. If we are able to leverage AI, we might be able to help achieve some of the UN Sustainable Development Goals.
I think one other thing I want to just kind of enter into the mix, you know, the customization of AI for different regions also depends upon data. And a lot of my work has focused on, you know, how do we create voluntary foundations so we can exchange data more easily? Like right now, we don’t have data standardization. So if I want to exchange my data with any of you, my data may be in a different format. As a former lawyer, a lot of my work is also focused on we don’t even have standard agreements. So if we want to exchange data, how can we easily transact and not have all that friction and transaction costs?
You know, we don't have Creative Commons licenses for data right now. And if we're ever going to get to that localization, that ideal point where we're customizing for different cultures, we're going to have to have a lot of different tools. We're going to have to figure out ways to voluntarily and responsibly share data. This has been part of the discussion, but hearing the conversations over the past week underscored the need to continue advancing that work while we work on some of the other topics we've been discussing.
Great. Julian?
More than anything, what this week has taught me is I’m old and this industry is moving.
Okay, so stop saying you’re old. You don’t look old. You look great.
This industry is moving so quickly. Again, skills that are needed and considered important today will no longer be necessary in five years. And if the workforce, and the users of the technologies, aren't evolving with it, we all fall behind. So while AI is a great advantage and opportunity, the danger is that it can also make us obsolete at the same time. We need to be very careful about that, about how we use it, and then about how we help, hopefully, to promote this throughout the world in a way that makes it equitable for everyone.
Great. Thank you. Amit, Sachin, any reflections?
Yeah, I think one of my big takeaways from this week was that some parts of the world are focused on AI as influence, some parts are focused on governance of AI, and I think India is focused on the impact of AI at the grassroots level. Thinking about how AI will impact a farmer or a small school or an NGO or a small hospital has been the focus. And it resonates with me, because the mission of my team is to keep everyone safe at scale. And when I say everyone, it's not just about Google or Alphabet, and not just about our billions of users, but the entire society, everyone at scale. How we become the architects, and not just the consumers, of AI, and make sure it reaches the grassroots level, is one area to think about.
I agree with that. So, of course, outside of the traffic bit, right? What I've learned, if you ask me, in the whole week, and I've been in this business for, I don't want to date myself, so say a couple of decades and we'll leave it there, is this. People used to say India is a back office; that's how it started in the 90s. Then Y2K happened and they said the IT industry would be over, because Y2K was all there was to it. Today, the IT and engineering industries together are $600 billion. We moved forward. Then people asked: are you going to take data?
Is data going to get leaked? Then COVID came, and India proved yet again that not a single data leak happened from India Inc anywhere. There are some draconian rules; we don't allow our employees to use USBs, and so on. Net result: zero data leakage, absolute privacy, and the government comes down very heavily if they find something like this. So they've been able to create a safe environment. Move forward. People used to ask: is India a market? This last week, forget technology companies; if you just walk the floors, you see companies like Schneider, like Vertiv, and others developing products for India, in India, and developing products for the world from India. It's no longer just a cost base.
So if I were to say there's one thing that I've learned in the last week, it is that India is no longer the back office for AI; it is actually the front office for AI for the world. That's the net summary that I would draw from the entire week that I've been here.
Thank you, that's very well put.
And, zooming out to the highest level, one of the things that I really genuinely felt this week, which has been very exciting to me, is that there is a lot of energy around how to deploy this technology and how to have impact. It's been really fun to be in a lot of sessions with students and entrepreneurs where you can really feel the energy. And I feel that the conversation around governance has come along and felt integrated in a really genuine way as well. If we look at the summit series that kicked off a few years ago at Bletchley, I think it's fair to say that early on the emphasis of the conversation felt very safety- and security-heavy. Last year in France, there was a big pivot to trying to think about the opportunity.
And what I see in India this week is a genuine integration and deepening of those conversations. What do we really mean when we say impact? What do we really want to see in deploying this technology? And then not taking for granted that, of course, governance actually has to come along with that. You have to really do the deep, hard work around things like multilingual AI, and there's a real need for partnership in moving those things forward. There's also a real need to think about governance steps so that you can have trust in this technology; India actually just passed a law last week on how to mark AI-generated content.
There's a real recognition that some of those steps are going to be important. And you don't want those steps to stop or prevent deployment of the technology or realization of the benefits. But we have to do the deep work together to move forward on deployment, impact, and governance together.
Thank you. Thanks, Amanda. We’ve got a few minutes. If anyone would like to chip in. Great. Hands are going up. The room’s filled, by the way, while we’ve been going along, and it’s been a great conversation. Let’s hand one or two mics out to colleagues around the room, if we can, to the lady here on the front.
Hello? Okay. Right. Thanks, and I appreciate the comments on the traffic; I think we've all got a traffic story. Now, I hear a lot of talk about upskilling and co-creation, which are all very important things, and I agree. But what I'm also hearing a lot about, and I'm sure you all are too, is the speed of this technology, which could potentially outpace some of these efforts. So my question, to anyone who might want to answer or has some real thoughts on it, is: what do you think the gaps might be that we would need to address in the transition between upskilling and real economic displacement?
Who can grab that? Yeah, you've got the mic, Julian; you're going to give it a go.
It's a real problem, right? Technology is moving so quickly. As I said, years ago I would tell young people in technology: learn to be the best programmer you can. Now, with agentic AI, especially with the use of MCP, where you can have multiple agents talking to each other and sharing information, it's about learning to be the best user and prompter of the technology and understanding the outcomes. But there's going to be some displacement. Right now, I would tell you that AI, especially in the security context, can probably eliminate 60% of the things that humans have to look at today. But there's still the 40% where a human has to be involved to make a determination around risk to an entity, whether it's a government, whether it's defense, whether it's a business.
And so it’s really helping them evolve to this next level of user, this next level of programmer, if you want to call it that. And there probably will be some displacement that we just can’t get around.
Gentleman in the front.
I actually have an extension of the same concern that the lady shared. Speed is one aspect, but I also think there's a whole information arbitrage between the people who are creating and pioneering in the AI space and the others to whom the information is reaching, and I sense the impact that could have on power polarization and even on democracies. A lot of the conversation that I hear today assumes that AI is moving linearly, but I see it moving exponentially. I agree. With a polarizing effect. Yes. Yes. Both. Both the polarizing effect and, you know, the 40% that Julian just spoke about. For me, that 40% is not really 40%.
It's just that we want to be very, very careful. But if we were not to care so much about accuracy and data standards, it could be 100%. It's very large. I think the displacement can happen very fast, so I'm really concerned about how things are moving. I'm not sure if my concern is shared by people on the panel.
Anyone want to respond?
I mean, I think we need to focus on AI literacy, because, again, the technology is moving so fast. How do we make sure people in their everyday lives and people in the workforce have access to education so they can continue to upskill? And, being in academia after having been in the private sector for, well, we won't go into how many decades, I also think about teaching students how to think. When you're looking at your career trajectory, it's not just about coming out of college with a set of skills; it's about learning how to think and how to problem-solve. And I think the public-private partnerships with academia that Amanda mentioned are really important, because a lot of times the tenured faculty don't know how to teach that to students, and it helps to bring people in to say: this is how you adapt, this is what to expect in your career. I say this not only from the perspective of being in academia but from having two children of my own in their 20s who are just starting their careers: expect the unexpected, but learn how to be on your toes. A lot of it is just having good analytical skills and good communication skills, and if you have those core skills, you're going to be able to adapt, and that will carry forward in the future.
Great. I think we've got time for one more question. Okay, gentleman. Oh, sorry, the lady who has the mic. She has the mic.
Thank you so much. My name is Rita Soni. I work with a company that's operating in small-town India, delivering all these tech services that many of these companies are doing. And my question is actually for Amanda, because I think she was the only one who really brought up the digital divide that continues to exist, both in India and across the globe. I actually didn't feel like I heard very much about how to actually bridge that. Yesterday I didn't have one of those special passes to go to the events on the 19th, so instead I visited a local nonprofit called the Digital Empowerment Foundation, which has been around for more than 20 years, doing incredible work in rural India.
And they're simply talking about last-mile Internet connectivity, let alone the enablement, ease of use, or the critical thinking that Lee just mentioned. So, just a few more words on how we can bridge this digital divide and make it more equitable, because the more folks are excluded, the more different kinds of problems we're going to have.
Yeah, and I think you may have come in after we talked briefly about some of the work that we're doing to address the digital divide. For a lot more words, I would point you to a blog we published on Wednesday, where we talked about investments in five areas that we're focusing on to close the gaps that we see. We actually point to the work that we've done using our own telemetry to track these gaps and their trajectory, and we really lifted up our own concerns about that trajectory. Among the areas of investment, infrastructure is really foundational, and in the blog we talk about infrastructure not only in terms of AI compute capacity but also the fundamentals beyond it, like connectivity and energy access, as really important as well. Then we talked about scaling multilingual and multicultural AI capabilities, really working with local communities on local use cases, and the kind of deep work we can do to help bring the technology to people. Even in agriculture, for example, we at Microsoft Research have done a lot of projects in close collaboration with local communities, trying to see how the technology could serve them, and also learning how the technology needs to evolve in order to do so better. And then basically taking a step back and continuing to study diffusion so we understand: are our interventions working?
Or are they not? Either way, what can we learn, and how can we improve how we're intervening?
Okay, so time’s up, everyone. Thank you so much for your contributions and for joining us at different points during the conversation. Thanks to the panelists for a really rich and diverse conversation. It’s been a real pleasure to have you with us. And I think we end with a sense of optimism that no matter what the challenges of the digital divide and those other elements, there’s probably an AI solution to the AI challenges that we’re creating. Thanks. Thank you. Thank you.
Event“Roughly 70 % of AI’s economic value could become concentrated in Western corporations and China if current trends continue.”
The knowledge base notes that some estimates suggest 70 % of AI’s economic value risks being concentrated in Western economies and China under current trends [S1].
“Democratising AI will require intentional design, international collaboration, and coordinated action across research, workforce development, private‑sector partnerships, and robust safety and security measures.”
Additional sources highlight that global AI governance must involve inclusive participation and address concentration of power in a few companies and countries, underscoring the need for coordinated, democratic action [S94].
“Lee Tiedrich presented the second International AI Safety Report, noting progress in evaluation techniques but a persistent gap.”
Lee Tiedrich is cited as emphasizing the need for global collaboration to develop common evaluation standards, indicating awareness of both progress and remaining gaps in AI safety assessment [S10].
“ISO 42001 is an early AI safety standard, and the NIST ‘zero draft’ will feed into future ISO work.”
The knowledge base reports ongoing work to incorporate AI safety assessments into the ISO process, with the expectation that these drafts will be adopted into future ISO standards [S99].
“Regional pre‑standard initiatives such as the Hiroshima AI process foster cross‑regional stakeholder cooperation.”
The Hiroshima process is identified as an instrument to promote collaboration among regional stakeholders on AI governance [S101].
“Regulators often lag behind the rapid pace of AI development, so regulation should follow robust, evidence‑based evaluation frameworks.”
The rapid development of AI is described as presenting unprecedented challenges for slower-moving regulatory frameworks, confirming the lag noted in the claim [S10].
“International agreements and verification technologies will be needed for AI safety at the global level.”
The knowledge base stresses that future AI governance will require international agreements and technical means for verification, adding nuance to the discussion of regulation and standards [S96].
The panel displayed a high degree of consensus around four core themes: (1) the need for globally coordinated AI standards that are culturally and linguistically adaptable; (2) a cautious, evaluation‑first approach to regulation to preserve innovation; (3) extensive public‑private collaboration coupled with robust capacity‑building programmes; and (4) multilingual AI as both an inclusion and security imperative. These shared positions suggest a collective willingness to pursue coordinated, inclusive and technically grounded AI governance frameworks.
There was strong consensus across speakers, suggesting that future policy and industry initiatives are likely to prioritize collaborative standard-setting, balanced regulation, large-scale upskilling, and multilingual inclusivity, which together could mitigate the concentration of AI value and enhance equitable AI diffusion.
The panel displayed broad consensus on the importance of democratizing AI, building capacity and fostering public‑private collaboration. Disagreements centered on the timing and nature of regulation, the perceived immediacy of AI‑driven job displacement, and the preferred mechanisms for addressing the skills gap. While most participants agreed on the goals of inclusive AI development and security, they diverged on policy sequencing and implementation tactics.
The level of disagreement was moderate: the disagreements are substantive but do not fracture the overall consensus. They highlight the need for coordinated policy design that balances innovation, regulation, and rapid upskilling, especially for developing regions.
The discussion was driven forward by a series of pivotal remarks that moved the conversation from abstract concerns about AI concentration to concrete, actionable strategies. Lee Tiedrich’s articulation of the standards‑cultural tension set the stage for debates on localization and continuous governance, which Sachin expanded with the idea of ongoing audits. Amit Chadha’s insider view of incentive‑based upskilling and Amanda Craig’s large‑scale Elevate program offered contrasting but complementary models for closing the skills gap. Julian Waits highlighted the geopolitical reliance on talent, prompting a deeper look at equity and displacement, while Lee’s later emphasis on AI literacy reframed the problem as one of education rather than mere training. Together, these comments created turning points that broadened the scope, introduced new dimensions (data sharing, measurement, talent flows), and steered the panel toward a consensus that collaborative, adaptable, and measurable approaches are essential for democratizing AI benefits.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.