Welfare for All Ensuring Equitable AI in the Worlds Democracies

20 Feb 2026 18:00h - 19:00h

Welfare for All Ensuring Equitable AI in the Worlds Democracies

Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel opened by warning that without intervention most of AI’s economic value could become concentrated in Western corporations and China, with estimates that up to 70 % may reside there, and argued that this outcome is not inevitable but must be democratized through intentional design and international collaboration [2-4][6-7].


Lee explained that the newly released international AI safety report highlights progress in evaluation but stresses a gap that can be narrowed by expanding standards such as ISO 42001 and by accelerating work through NIST drafts and regional pre-standard initiatives like the Hiroshima AI process, while also allowing cultural and linguistic customization [33-41][44-46]. Sachin added that simply copying regulations across markets often fails, citing Google’s Indiq GenBench which supports 29 Indian languages as an example of needed localization, and emphasized the necessity of continuous auditing to prevent drift as AI models evolve [48-56].


Building on this, participants described co-creation models where developers and governments act as enablers rather than barriers, with Google promoting open-source frameworks, the secure AI framework (SAIF), tools such as SynthID, and the Coalition of Secure AI (COSI) to support capacity building and workforce upskilling [61-78]. Amit highlighted that excessive regulation can stifle innovation and that his company balances carrots-recognition for patents, papers, and speaking engagements-with budget allocations for training to boost productivity, reporting an increase from 73 % to 83 % utilization [131-133][218-233]. Microsoft’s Amanda detailed the “Microsoft Elevate” initiative, which aims to upskill 20 million Indians by 2030 through partnerships with schools, vocational institutes, and government ministries, and stressed a holistic approach that includes infrastructure, multilingual AI, local deployment, and diffusion measurement [105-126].


The discussion then turned to trust and security, with Brad noting a U.S. fintech survey showing less than 20 % public trust in AI and raising concerns about prompt-injection attacks, especially in low-resource languages [241-244]. Amit explained that attackers can exploit unsupported languages to jailbreak models, and that expanding the ML Commons jailbreak benchmark to include Indic and Asian languages is a step toward mitigating such threats [250-257]. Sachin argued that AI-driven defensive agents can reverse the traditional “defender’s dilemma” by automating routine security work, thereby giving defenders an aggregate advantage over attackers [263-272].


Lee concluded that global cooperation across government, academia, industry, and civil society is essential, and that standardizing data formats and licensing will enable the regional customization needed for AI to support UN Sustainable Development Goals [283-294]. Amit reflected that India’s focus is on grassroots AI impact for farmers, schools, and hospitals, positioning the country as a front-office for AI rather than a back-office [307-313]. Amanda observed that recent weeks have integrated governance with impact discussions, emphasizing multilingual AI, partnership, and recent Indian legislation on AI-generated content as signs of mature, responsible deployment [334-342].


The panel closed on an optimistic note, asserting that despite digital-divide challenges, collaborative AI solutions and continued public-private partnerships can address both technical and societal risks [409-414].


Keypoints


Major discussion points


International collaboration and adaptable standards are essential to prevent AI value concentration in a few Western or Chinese entities. Brad frames the risk of a 70 % concentration of AI’s economic value in those regions and stresses the need for intentional design and global cooperation [2-4][6-8]. Lee highlights the role of ISO and NIST in drafting standards while warning that standards must be customizable for different languages and cultures [33-41][44-46]. Sachin adds that simply copying regulations across borders often fails, underscoring the need to localize standards for diverse markets [48-52].


Public-private capacity-building and upskilling programs are critical to bridge the AI skills gap, especially in developing economies. Amanda describes Microsoft’s “Elevate” initiative, its multi-year commitment to train millions of Indians and its partnership with schools and ministries [105-124][125-126]. Amit explains L&T’s three-pronged approach: collaborating with colleges, upskilling current staff while they remain billable, and incentivising personal research and patent work [149-158][165-174]. Sachin stresses continuous auditing of AI models because a one-time certification cannot keep pace with rapid model evolution [56-57].


A tension exists between global regulation/standards and the need to foster innovation; a co-creation, adaptive approach is advocated. Brad asks whether setting global standards may hinder innovation [79-87]. Lee warns that moving too quickly to regulation can outpace technological change, suggesting a bottom-up, science-first evaluation framework before deciding on rules [89-96]. Sachin argues that global standards should be a flexible “creative tension” that adapts to local constraints such as bandwidth and linguistic diversity [81-87]. Amit echoes the concern that over-regulation can stifle innovation and calls for careful, targeted rules [130-133].


Security, trust, and AI-specific cyber risks (e.g., prompt-injection) require immediate, scalable defenses, including multilingual robustness. Brad notes the public’s low trust in AI-driven financial services and asks for priority actions against threats like prompt injection [237-244]. Amit points out that models weak in low-resource languages become attack vectors, and he cites work on a multilingual jailbreak benchmark to harden systems [250-258]. Sachin describes the development of self-defending AI agents that act like an immune system, aiming to give defenders an aggregate advantage over attackers [263-272].


Localization-multilingual AI, culturally aware data standards, and open data frameworks-is vital for equitable AI deployment. Sachin’s “IndiQ GenBench” demonstrates the need for language-specific evaluation tools [53-55]. Amit stresses that poor support for Indic languages can enable prompt-injection attacks, reinforcing the push for multilingual capabilities [250-258]. Lee calls for voluntary data-exchange foundations and standardized data licenses to reduce friction in cross-regional collaborations [286-294].


Overall purpose / goal of the discussion


The panel convened to explore how the global AI ecosystem can be democratized: preventing concentration of economic value, establishing inclusive standards, building a skilled workforce, ensuring security and trust, and tailoring AI to diverse cultural and linguistic contexts through coordinated public-private and international effort.


Overall tone


The conversation begins with a measured, forward-looking tone emphasizing collaboration and optimism about shaping AI’s future. As the dialogue progresses, it becomes more technical and urgent, addressing concrete challenges such as regulatory trade-offs, skills shortages, and security threats. By the closing remarks, the tone shifts to reflective optimism, acknowledging the rapid pace of change while expressing confidence that coordinated action can deliver equitable, trustworthy AI outcomes.


Speakers

Amit Chadha – Managing Director and CEO, L&T Technology Services – expertise in AI engineering, technology services, and industry leadership. [S1]


Sachin Kakkar – India Site Development, Privacy, Safety and Security, Google – expertise in AI privacy, safety, security, and localization for Indian markets. [S4]


Amanda Craig Deckard – Senior Director, Office of Responsible AI, Microsoft – expertise in responsible AI policy, AI governance, skilling initiatives, and digital inclusion.


Brad Staples – Panel moderator/host – expertise in AI policy discussion facilitation and moderation. [S6]


Lee Tiedrich – Inaugurable AI Multidisciplinary Initiative Fellow, University of Maryland; Senior Advisor on the International AI Safety Report – expertise in AI safety standards, international collaboration, and evaluation frameworks.


Julian Waits – Chief Experience Officer, Rapid7 – expertise in cybersecurity, AI security, and AI-driven threat mitigation.


Audience – Various participants (e.g., Yuv from Senegal, Professor Charu from the Indian Institute of Public Administration, Dr. Nazar) – expertise not specified. [S13][S14][S15]


Additional speakers:


Steve – Briefly addressed by Sachin Kakkar (“Thanks, Steve”); role and expertise not identified in the transcript.


Full session reportComprehensive analysis and detailed insights

1. Opening framing (Brad Staples) – Brad warned that, if current trends continue, roughly 70 % of AI’s economic value could become concentrated in Western corporations and China [2-4]. He emphasized that this outcome is not inevitable; democratising AI will require intentional design, international collaboration, and coordinated action across research, workforce development, private-sector partnerships, and robust safety and security measures [6-8].


2. International standards & evaluation (Lee Tiedrich) – Lee presented the second International AI Safety Report, noting progress in evaluation techniques but a persistent gap [33-36]. He highlighted ISO 42 001 as an early standard and described the NIST “zero draft” that will feed into future ISO work [38-41]. Regional pre-standard initiatives such as the Hiroshima AI process were cited as venues for cross-regional stakeholder cooperation [42-46].


3. Localisation & continuous compliance (Sachin Kakkar) – Sachin argued that transplanting regulations across markets often fails, underscoring the need for localisation. He showcased Google’s Indiq GenBench, which supports 29 Indian languages, 12 scripts, and four language families for fine-tuning large language models [52-55]. He warned that one-off audits are insufficient for evolving models and advocated continuous scanning pipelines to prevent temporal drift [56-57].


4. Co-creation model (Sachin Kakkar) – Building on localisation, Sachin described an open-source “Safe SAIF (Secure AI Framework)” and tools such as SynthID, a watermarking technique that flags AI-generated content [63-66][74-76]. He also outlined the Coalition of Secure AI (COSI), an industry partnership expanding across APAC, and stressed capacity-building through threat-intelligence sharing and workforce upskilling [69-78].


5. Global standards vs. regulation trade-off – Lee argued that regulation should follow robust, evidence-based evaluation frameworks, warning that regulators often lag behind rapid technological change [89-95]. Amit cautioned that excessive regulation can stifle innovation and must be applied judiciously [130-133]. Sachin framed the tension as a “creative tension”: global standards should be adapted to local constraints such as bandwidth and linguistic diversity, turning potential hurdles into co-creation opportunities [81-87].


6. Skills-gap & public-private upskilling


Microsoft (Amanda Craig-Deckard)* – The “Elevate” programme aims to upskill 20 million Indians by 2030, combining cloud-compute access, AI tools, and partnerships with schools, vocational institutes, and government ministries. A dedicated “Elevate for Educators” track trains teachers at scale. The effort sits within a five-pillar strategy: hard infrastructure, AI compute capacity, multilingual AI, local deployment, and systematic diffusion measurement [108-118][119-126].


L&T Technology Services (Amit Chadha)* – L&T pursues a three-pronged approach: (i) collaborating with colleges to refresh curricula for the next five years [149-152]; (ii) upskilling current employees while they remain billable, integrating training into project work [156-164]; and (iii) incentivising personal research time, raising patent filings from 50 to 200 per year and increasing staff contributions beyond billable hours from 19 % to 52 %, which lifted productivity from 73 % to 83 % [165-174][230-233].


Rapid7 (Julian Waits)* – Julian noted that Rapid7 relies on talent from abroad to maintain its competitive edge and highlighted that AI can eliminate 60 % of the routine tasks humans currently perform [300-304][366-368].


7. Carrot-vs-stick discussion – Brad asked whether global standards might hinder innovation. Amanda responded with mixed-tactics such as weekly tips and hackathons to encourage adoption [89-95]. Amit advocated a “carrot-only” approach, using patent-glory incentives and budget allocations to motivate compliance [130-133].


8. Trust, security & multilingual vulnerabilities – Brad cited a YouGov survey showing fewer than 20 % of Americans trust AI in financial services [241-244]. Amit explained that models weak in low-resource languages become attack vectors; attackers can jailbreak systems by exploiting unsupported languages such as Tamil [250-252]. To counter this, Google contributed to an expanded ML Commons jailbreak benchmark that now includes Indic and other Asian languages [255-257].


9. AI-driven defensive agents (Sachin Kakkar) – Sachin described emerging AI-driven defensive agents that act like an immune system. He argued that, unlike the traditional “defender’s dilemma,” AI can automate 80 % of routine defensive work, giving defenders an aggregate advantage [263-272].


10. Data-exchange & licensing (Lee Tiedrich) – Lee called for voluntary data-exchange foundations, standard agreements, and Creative-Commons-like licences for data to lower friction in cross-border collaborations [280-285].


11. Closing reflections – Lee stressed that global cooperation across government, academia, industry, and civil society remains vital for mitigating AI risks and achieving the UN Sustainable Development Goals [283-286]. Amit highlighted India’s shift from a “back-office” to a “front-office” AI role, focusing on grassroots impact for farmers, schools, and hospitals [307-313]. Julian warned that the industry’s rapid pace could render today’s skills obsolete within five years, yet reiterated that AI can automate a large share of security tasks while still requiring human judgement [300-304][366-368]. Amanda reiterated that recent weeks have seen genuine integration of governance with impact discussions, citing India’s new law on marking AI-generated content as evidence of mature, responsible deployment [334-342].


12. Audience Q&A – Rita Soni raised digital-divide concerns, referencing the Digital Empowerment Foundation, prompting discussion of Microsoft’s infrastructure and diffusion work [360-376]. An audience member warned of “information arbitrage” between AI creators and broader society, echoing fears that exponential AI growth could outpace up-skilling and exacerbate power polarisation [387-393]. Lee responded by advocating AI literacy, problem-solving skills, and lifelong learning as remedies [387-393].


13. Consensus pillars – The panel converged on four pillars: (1) globally coordinated yet locally adaptable AI standards; (2) evidence-based evaluation before regulation; (3) large-scale public-private capacity-building programmes to close the skills gap; and (4) multilingual AI as both an inclusion and security imperative [6-8][33-41][108-126][263-272]. Points of disagreement centred on the timing and extent of regulation-Amit warned against over-regulation while Lee urged robust technical evaluation first-and on the magnitude of imminent job displacement, with Julian optimistic about AI-assisted roles and an audience member fearing far-greater displacement [89-95][130-133][300-304][360-376].


14. Action items – Expand ISO 42 001 and NIST drafts to cover cultural variations; extend multilingual benchmarks; scale Microsoft Elevate; reinforce L&T’s incentive-based upskilling; develop self-defending AI agents; implement continuous audit pipelines; and establish voluntary data-sharing frameworks [38-41][250-257][119-126][165-174][263-272][280-285].


Session transcriptComplete transcript of the session
Brad Staples

by corporations, by innovators to secure that outcome. And if current trends continue, the majority of AI’s economic value risks being centered in the hands of countries and corporations in the Western economies in China. And some estimates suggest that 70 % of the value could be created and reside in those locations. And I think it’s for us in this context to think a bit about why we don’t need to accept that outcome. It’s by far means not an inevitability. And to democratize the impact of AI, it requires intentional design, it takes international collaboration, and it takes societies coming together to ensure that doesn’t happen. It also takes innovation and research, workforce development, private sector partnerships, and also trust, safety, and security.

And they’re the things we’re going to talk about on the panel today. And my colleagues are extremely well -placed. to share their thoughts and insights on those topics. So let me introduce the panel. We have Amit Chandha, Managing Director and CEO of L &T Technology Services. Good to see you, Amit.

Amit Chadha

Happy to be here.

Brad Staples

Great to have you with us. Amanda Craig -Dekard, Senior Director, Office of Responsible AI at Microsoft. Great to have you with us. Sachin Kakar from India Site Development, Privacy, Safety and Security at Google. Good to have you with us, Sachin. Thank you for being with us. Lee Tedrick, Inaugurable AI Multidisciplinary Initiative Fellow, University of Maryland, Senior Advisor on the International AI Safety Report. Lee, good to have you with us. And last but by no means least, Julian Waits, Chief Experience Officer with Rapid7. Good to have you with us. Good to have you with us. Okay. So without further ado, let’s take a look at international and scientific research collaboration. And, Lee, let me come to you.

Let me pose. Here’s a question. Okay. And, Lee, let me pose. And the second international AI safety report was released just ahead of this conference, something that you’re very much an author of. Let’s start by hearing from you and then maybe, Sachin, I’ll bring you in. What opportunities do you see, Lee, in open international standards to address the technical challenges that we face while also building trust in AI -based systems and services? Which of these, how would you characterize those challenges and which are most critical in a developing country context?

Lee Tiedrich

Yeah, thanks for the question, Brad, and there’s a lot here. So the international AI safety report that I worked on with a panel of about 100 experts was just released. And one of the key takeaways from the report is that while we have made a lot of progress over the past year in evaluations and developing evidence, there’s still a long way to go. There’s a gap. And I think, you know, internationally. International standards organizations and similar efforts is a good way to work together to try to fill some of the gaps. ISO has already released one standard, 42 ,001, which is a good start, but we need to accelerate this, and we need to also recognize the fact that standards and evaluation metrics, you know, there’s a tension.

On the one hand, we want them to be able to apply across borders because we want to enable companies to have responsible technology flow across borders. But on the other hand, it’s really important because we all differ in terms of language and culture that we need to be able to customize them for different cultures, norms, languages. And I think, you know, the standards organizations will continue to play an important role. I spent a year working at NIST, the U .S. National Institute of Standards and Technology. One of the NIST projects is working on what we call the zero draft of trying to create a draft that we could then feed into the ISO process, and NIST is trying to collect stakeholder input into that draft.

And I think, you know, more globally, you know, efforts like the Hiroshima AI process, there are sort of all these pre -standards efforts where different stakeholders across different regions can work together. And I think that the ACs, the AI safety institutes across different countries and how they can coordinate. So I think there’s a lot of work to be done, but I think there’s a lot of avenues where we can collaborate together and make sure that we’re addressing the needs of everybody around the globe. Thank you.

Sachin Kakkar

Yeah, thanks, Steve. Very well covered. If I can add just a few more points. I think one of the challenges we see is copy pasting the regulations or standards from, you know, international markets to local markets may not always work. So localizing them, understanding the needs and constraints of the local area. Google launch Indiq GenBench. It’s a test bench for fine tuning. And assessing the. LLM models for local languages, supporting 29 Indian languages, 12 scripts, and 4 language families. So that shows an example of how we need to localize things. The second point is one -time audit or certification may not work as AI evolves. We need a continuous scanning and auditing to make sure we avoid any temporal drift in these standards and the applications.

Brad Staples

So Sachin, let’s build on that. How do governments and developers collaborate in a way that we get the outcome that everyone desires, which is not to see the developed markets race ahead of developing countries? What does that collaboration need to look like?

Sachin Kakkar

Yeah, that’s an interesting question. I think at highest level, the way we think to bridge the gap between AI divide is to move away from traditional, traditional transfer approach. to more co -creation where developers and government coming together and and the underlying goal is that standards and regulations are seen as enablers and equalizers not as barriers or compliance hurdle so three specific dimensions in which we believe developers and government can collaborate and Google specifically focuses on number one is open source frameworks and interoperability and standards second capacity building and third is workforce upskilling and research I’ll quickly unpack each one of them so starting with open source frameworks AI is not new to Google we have been working on AI for past decades and remember Alpha fold and we were the first one to share the transformer paper on which all the LLMs are built when we were building AI we were also focusing on best AI practices and safety practices on AI And we have open sourced all the best practices to keep AI safe.

Safe SAIF, secure AI framework is something we have shared outside. And it is important to understand supply chain risk. And India’s digital transformation is characterized by DPI, the digital public infrastructure on which Aadhaar and UPI are built. So they can actually leverage some of these secure AI framework to make sure the malware attacks and the vulnerability in open source components are taken care. Now, standards is one thing. The collaboration goes beyond to adoption of them. And Google has co -built the COSI, Coalition of Secure AI Framework, with various industry partners. And this is what we are expanding in APAC, including India. Now, we are also committed to capacity building. With the government. And which means we need to provide tools and infrastructure, not just standards.

So we are proactively sharing the threat intelligence. We are building tools like SynthID and sharing with the community abroad. SynthID is a watermark technique which goes into the text, image, video, audio, and it can tell you whether it is AI -generated content. So some of these tools are also helping us to make sure our commitment towards standards goes into actual adoption. And finally, upskilling workforce, digital literacy, working with government to make sure the vulnerable section of the society, like elderly and teenagers, are aware of some of these challenges. And giving grants to institutes like IITs to push the frontier of research, like PQC, post -quantum cryptography, are other areas of collaboration between AI developers and the government and academia.

Brad Staples

Let me just ask you both a question. Is there a trade -off between setting global standards and regulation? ensuring the right environment for innovation and collaboration?

Sachin Kakkar

Oh, yeah, that’s right. And that’s where you can start with the global regulations but then adapt them to the local constraints. Like we have bandwidth constraints in India. We have linguistic diversity. And therefore, the global standard should not become a hurdle for the young startups in India. Rather, they become co -creators in enabling the innovation that can happen and then evolve from there. So it’s a creative tension, and I think the best way is to be adaptive in this situation and eventually evolve to the international standard.

Brad Staples

How do you see this interplay, Lee?

Lee Tiedrich

Yeah, I think, I mean, kind of in my work, you know, both in government, academia, and I spent 30 years working with the private sector, I think sort of figuring out the standards and the standards that are in place and the values that are in place. evaluation techniques is really key. You know, how are we going to evaluate these systems so we can, they can meet a certain threshold of safety. And then I think the question kind of comes in, you know, afterwards, once we know what it is, you know, should there be regulation or not? You know, I worry a lot of times that when we go too quickly toward the regulation, you know, the best of intentions may be there, but, you know, the technology is moving so quickly, regulators don’t necessarily know how to style the regulations to achieve the goal.

And I think sort of working from the bottom up with the science, developing the evaluation technique, taking into account that we do need to socialize, you know, customize for local markets is really important. And then we can get to the question of, well, should there be a regulation or not? And that’s where, you know, different countries may have different answers, but at least we’re working from a common technical framework and evaluation framework to assess systems. Thank you.

Brad Staples

Thank you both. Let’s make a shift to… The conversation towards more public -private… collaboration, which I think we know is at the heart of driving the success that everybody’s looking for. And Sachin was talking a little bit about capacity building. Maybe we focus on those two elements. And Amanda, I’ll come to you and then to Amit. So there’s a persistent skills gap in AI. It’s very apparent and a lot’s being done to try and bridge that here in this country. How are your, has your organization, and I’ll come to you Amit with the same question, how are your organizations grappling with that challenge and also collaborating with government to help to narrow that skills gap?

Amanda Craig Deckard

Thank you. Yes, skills gap is really important. We see it as part of the sort of foundational infrastructure for what we need to work on together as Microsoft with other industry partners, government partners, other local partners. It’s going to take a whole community really working together to do this at scale. And just to take a step back for a moment briefly before I talk more specifically about skills, you know, we kind of see this as part of a holistic effort where you kind of need to support all of the enabling infrastructure for AI deployment, kind of from from the infrastructure layer all the way through sort of realizing value in local use cases. So we actually published on Wednesday a blog from our president, Brad Smith, our chief responsible AI officer, Natasha Crampton, where we talk about sort of five areas where we’re really focused on investing to kind of close the gaps between AI diffusion and the global north and global south.

So we talk about, like, hard infrastructure investment, right, in terms of connectivity, AI compute capacity, scaling is the second part of that plan. And the third part is really thinking about multilingual, multicultural AI capability. And the fourth is really working with local partners on local AI deployment and really what we can learn and what’s going to serve local communities, also what we can learn through that process around how we need to adapt the technology so it’s ready for those local use cases. And then really measuring diffusion so that we actually understand how things are going and how we can do that. And then really measuring diffusion so that we actually understand how things are going and how we can do that.

And then really measuring diffusion so that we actually have really informed interventions. And then really measuring diffusion so that we actually have really informed interventions. And then really measuring diffusion so that we actually have really informed interventions. So that’s the kind of holistic approach that we’re thinking about for public -private partnership. And looking at skilling more specifically, we actually have a new sort of initiative that we launched last July at Microsoft called Microsoft Elevate, which is really bringing together a number of ways that we engage with a community that is going to also be part of skilling everyone at scale, so sort of nonprofit communities, schools, and actually ensuring that they’re equipped with the technology itself, so with cloud compute access and with access to AI.

And then we are coupling that with investments in skilling. So we have made some big -number commitments around how we are really trying to do this at scale. I would say specifically for India is, you know, we last year, early last year, we made this commitment to scale up 10 million Indians by 2030. This year, we upscaled 5 .6 million Indians, and so we actually doubled that commitment to 20 million people by the end of 2030. And one of the ways that we’re doing that is we’re actually, we just announced this week a new Elevate for Educators in India program where we’re partnering with local schools, with vocational institutes, with higher education institutions to sort of teach the teachers, right? So you can actually work at scale, and we’re working with a number of Indian government ministries in this program to figure out, you know, what, how we can ensure that we have tailored programs for all of those different communities and that we’re thinking holistically about how.

You know, we, across those different sort of educational paths, are really meeting people where they are and equipping them to kind of do the next powerful thing with AI.

Brad Staples

Thanks, Amanda. And as a business, L &T Tech Services, I mean, part of L &T originating here in India, but now very much involved in global markets. How are you tackling this in terms of addressing the skills gap?

Amit Chadha

Sure. So thank you. So before I go to skill gap, I do want to make a point on the regulation part. I do believe that too much of regulation can stifle innovation as well. So we’ve got to be careful on how much we do and where do we take it. And then the second part, of course, is to do regulation of traffic control in Delhi for our next event that we have. I think all of us will agree. Let’s get down to skills in a second now. I had to say that because it was a mess in the last two days. I’ve got pictures of myself in an auto rickshaw as well. So if we get down to skill gap, I want to address this three ways.

So I am responsible. I run a company which is potentially India’s first, engineering intelligence company with about 25 ,000 employees. I’ve been CEO for five years. When I took over, we used to be about 15 ,000 employees. We’re about 14 now, we’re about 25 ,000 employees. So, we look at skill gap and I look at skill levels. Three things you have to think about. Whatever work we’re doing in engineering consulting today, I want to say 40 to 50 % of that is new and built in the last five years, did not exist. I also want to say that whatever we are doing today, 60 % will be gone in about three to five years time. That’s the rate and pace of change. So, while my colleague from Microsoft spoke about skilling school stem as well as colleges, we’re doing two different things to stay current with the changing dynamics or three things.

One, we are actually reaching out to colleges. In the last year of their curriculum and we are making sure that the curriculum is going to be in the last five years. in India is contextual to what the industry needs. So we are sending our employees to teach. We are using CSR hours. We are doing all of that to build that up. We are actually participating with NASSCOM as well to be able to do that in the skill development. The second thing we are doing is upskilling our own employees. Now, again, in a developed economy, it’s very simple that you hear these layoffs that happen all the time and they are not because people don’t have work but because the skill is redundant.

So let’s go ahead and get a new set of skills. In an Indian context, my colleague here spoke about that very nicely. You can’t cut and paste. You fire a thousand people, you will actually end up spending half your working hours plus more with the labor commissioner here locally. You can’t do that. So you have to be able to skill people up while they are in the workforce. Now, one thing is developing curriculum, developing modules for them. to go through but the second part is actually making them do it so and normally in a consulting company you would send people to get get coached and do upskilling when they are not billable we actually doing it while they are billable because when they become non billable that’s not when you want it you want it before that right so that’s and it’s a major shift in how we’ve been operating the third thing that we are tracking as an engineering and a technology company is how much of personal time is the employee spending on technology development efforts beyond billing hours to the client so you come in and spend 40 hours right and that’s what you normally work now if you spend another three hours to write a technology paper you file a patent you actually go speak at a symposium all that is towards technology effort beyond billable hours.

The percentage of workforce within the company five years ago that did that was at 19%. Today, that number stands at 52 % of our workforce spends time, personal time to go spend time on technology beyond billable hours. And the net result of that has been we used to file 100 patents per year. We have gone from there, sorry, we used to file 50 patents per year. We have gone to filing 200 patents per year. So the point is that so again, summarizing, one, reach out to the local ecosystem and do it and spend the last year with them. That’s the hook in. Second, upskill the workforce within. And third, beyond just money, find a bigger purpose like technology or betterment of human race with technology to motivate your workforce to actually spend time on that.

And I think that’s what we’ve been doing and we think will be helpful. One last thing and we keep discussing India. But if I look at the US today, and I’ve lived there for 27 years now, is we will need schools to start mandating a certain level of STEM education that has to be done. Today, both my boys went to public schools in Virginia. I can tell you that in some schools, it’s broken. And we don’t do that in the US. We don’t do it in parts of Europe. We will continue to look at different countries for skills. And that is not where we want to be in 20 years time. I’m sorry. Jump in. Jump in, Julius.

Julian Waits

I was going to agree with what you just said. Because Rapid7, like your company, of course, we’re a software company. We’ve basically mandated the use of agentic technologies by our employees, especially the ones in developing countries or countries that aren’t as developed as the United States. What I would tell you also on the education system, which is unique to the US, which is what makes India special. And that’s why we’re in such a wonderful place. It’s because of the technology. we’re so far behind, we’re forced to use labor in other societies that appreciate the use of STEM technology and where it’s embedded in the way that they learn. We have no choice. If we didn’t have foreign workers in the U .S., we would fall behind the rest of the world.

You don’t hear that too often.

Brad Staples

Let me just probe a little bit on this. How much is carrot and how much is stick when you’re looking to upskill the workforce and bring them into more of an AI mindset? You’ve got a very bold program at Microsoft reaching across colleges, but you’re also active, I know, in creating the capabilities within the workplace. How much of this, to both of you, is carrot or stick? I was at a dinner in D .C. a few weeks ago where the head of a large media group had told his team they had to be two times more productive by the end of 25 using AI. to stay in their roles and 10 times more productive by the end of 26.

That was an expectation. But it was set very much as a minimum standard and goal. They were putting training programs in place, but there was a clear metric to achieve. What’s your perspective based on how you’ve seen this work?

Amanda Craig Deckard

You mean internally?

Brad Staples

Either within Microsoft or within the companies that you collaborate with in training.

Amanda Craig Deckard

In our experience, I think we are much more leaning in the direction of using CareReds. So we have a lot of programs internally that are a mix, I think, in terms of tactics that’s important. Both kind of like, here’s a day -long training or a week -long training program, right? Which I think is really valuable. It gives you an opportunity to really dig in. But also really difficult. Difficult to find the time for. And so we actually have weekly tips. for how colleagues that are in similar roles are using Copilot, for example, internally to have more efficiency in their work. And I feel like that’s the kind of thing where, you know, is that skilling, is that training?

I don’t know, but it certainly is helpful because that’s the kind of thing that in my day -to -day job I can look to and integrate much more easily. And the other thing that we’ve started doing is hackathon -type exercises internally that are not just oriented towards engineering communities, but actually our corporate external legal affairs group, which is not just lawyers, but is a lot of lawyers, for example, having a hackathon that’s really meeting that community where we are and building a Copilot to serve our kind of day -to -day work. And so a lot of, like, different kind of carrot approaches is what we’re doing internally and where we see, I personally can say, like, I feel especially the latter two, it’s just hard to find, like, time to do a deep training program.

But if you integrate sort of into your day -to -day work, make it easy with these kind of carrots, you can really start seeing the impact, and that motivates you to use the technology more.

Amit Chadha

So, stick is out of the window, you can’t do that anymore, right? But we use carrots and budgets. Okay? When I say carrots, it’s basically appealing to the individual now and their glorification. So if it’s a patent, you’re filing it. The company doesn’t own it, you own it, right? If there’s a paper, you’re writing it. If you’re speaking at a symposium, you’re doing it, right? And that allows them to think. And then we’ve actually spent a lot of time through HR to try and explain that with the pace of change of technology, if you don’t upscale, you don’t change, you actually are facing extinction in about five, ten years’ time. Gone are the days where you can be there on the same technology for 30 years, will not work, right?

So we home in the message, provide that, and then provide the push. we glorify people that file patents, we glorify them within the company so that’s one. Second when I come to budgets, we actually leverage budgets with our segment heads. So they’re given budgets, they’re given training budgets, we also provide them headcount budgets and say can’t exceed. So we’ve been able to actually improve productivity with AI so we used to run on a utilization of productivity basis the metrics all service companies track at about 73 % five years ago. We’re already at 83 % and I think I can push this up another 2 % in terms of productivity levels in the company again leveraging AI and that’s the budget approach that we use but with the seniors.

So it’s a mix of both if I may to be able to manage this and motivate this. But it’s an ongoing exercise.

Brad Staples

It’s fascinating maybe we’ll come back to it as we talk to a close. Let me shift gears a little bit and talk a bit more about security and trust and come to you Julian if I can. So I think we’ve recognized and we’ve heard it in different conversations this week that there’s a trust deficit around the use of AI, certainly in a public context. There is some fear, suspicion, and anxiety in a global context. I’m not talking just about India. YouGov carried out a survey in the U .S. last month, and in the context of fintech, they found that less than 20 % of Americans trust AI in financial services. And they’re also sort of struggling, I think, with some of the cybersecurity questions and issues, which you’re very well placed to address.

So if public trust in AI remains fragile and AI -specific cyber risks are growing, which they clearly are, what are the immediate steps that industry should prioritize to counter those threats? And… Things like prompt injection attacks. How can these solutions be scaled? Thank you. particularly for developing countries?

Amit Chadha

of seven. So other than the incentives that we’re giving you to learn these technologies, which of course is to the company’s benefit, it’s to your benefit because these skills that you’re learning and that you’re going to be using will translate to the next thing that you do, and it makes you that much better. If we do enough of that, not only are we helping the employees, but we’re helping the societies and the ecosystem that they live in, including in India. I wanted to add one additional area that we’re really focused on to address the kind of AI cyber threats, particularly relevant in India and other areas in the global south. I mentioned that one of the areas that we’re focused on is multilingual and multicultural AI capabilities, and one of the most important foundational reasons for doing that, of course, is that you have an AI that works well.

and in different languages and cultural context is reliable performs well. Another reason is also that AI that is not robust and it’s multilingual and multicultural capabilities does have additional security weaknesses. You mentioned prompt injection attacks and you know one way in which you can think about a prompt injection attack is basically if you have an AI system and you have the sort of safety system around that, someone who is misusing the technology can sort of try to break that safety system or get around it and one of the ways that attackers do that is by using languages that are not well supported in that model or system right so if a model or system is primarily prepared to perform well in high resource languages, but not in low resource languages.

Tamil, for example, or some other sort of language that is not really built in to how the model performs, if companies aren’t attuned to that, then an attacker could use that language and jailbreak the system, basically get around the safety system. And so it’s just another reason why it’s really important from our perspective, and we’re partnering with a lot of others in industry and government, so this comes back to a public -private partnership opportunity, to really work on multilingual and multicultural AI capabilities. One of the things that we announced this week is actually there’s a benchmark from an organization called ML Commons, which is a jailbreak benchmark. It’s actually measuring how robust systems are against that kind of prompt injection attack technique.

And we worked with a number of others to really build out the current version of that. which is really English -specific, to include multiple Indic languages and Asian languages in terms of its capability. It’s not going to solve the problem. It’s one step of what we see in the right direction. But I just want to draw that sort of really specific area of focus in India and other areas for thinking about the kind of AI and cyber threats.

Brad Staples

That’s wonderful. Thank you.

Sachin Kakkar

Can I add a point?

Brad Staples

Sure.

Sachin Kakkar

So this is about the rise in prominence of AI agents. And we have been constantly investing in self -defending systems, just like a human immune system. As agents grow and they can – the scale and speed at which they can attack infrastructure, the hospitals, the energy grids, we need agents on the other side. And this becomes AI versus AI story, where we are smartly inventing agents. And we believe, first time, with AI. We can reverse the defender’s dilemma. So the dilemma, many of you might already know, attackers have to find just one open wallet in this crowd, but defenders have to protect all the wallets all the time. And first time, AI will give us aggregate advantage to defenders because majority of defenders’ time, 80%, goes in drudgery and skunk work.

And AI can actually automate and uplift that work. So the entire stack of defenders can improve and uplift with AI. And we believe that we’ll be able to build a self -defending adaptive system which can protect us from various vulnerabilities.

Brad Staples

Wonderful. Thank you. Well, we’re drawing towards the close of the session, and it’s been a very rich conversation. I just wanted to take a step back and ask you all, you’ve been – most of you have been here all week. And you’ve heard a whole host of different interventions and some very significant investments and initiatives. What are your conclusions? What’s changed? changed in your perspective when you look at AI for the future from your own vantage point? What’s this event given you a new perspective on or crystallized in your minds? Maybe, let me go back to Lee. Do you want to share your thoughts?

Lee Tiedrich

It’s reinforced for me, you know, something I’ve seen through a lot of my international work with OECD, with global partnership on AI, just the need for the global cooperation, and not just at the government level, but among all different types of stakeholders, you know, within academia, within industry, within civil society, and working together. And I think, you know, we can sort of pause at this moment and say, you know, if you look at the safety report, we’ve made a lot of progress over the last few years, but we need to continue to work together and not just focus on the harms and the risks that AI can have, but think about the benefits. You know, if we are able to leverage AI, we might be able to, you know, help achieve some of the UN Sustainable Development Goals.

I think one other thing I want to just kind of enter into the mix, you know, the customization of AI for different regions also depends upon data. And a lot of my work has focused on, you know, how do we create voluntary foundations so we can exchange data more easily? Like right now, we don’t have data standardization. So if I want to exchange my data with any of you, my data may be in a different format. As a former lawyer, a lot of my work is also focused on we don’t even have standard agreements. So if we want to exchange data, how can we easily transact and not have all that friction and transaction costs?

You know, we don’t have the Creative Commons licenses right now for data. And if we’re ever going to get to that localization and that ideal point where we’re customizing for different cultures, we’re going to have to have a lot of different tools. we’re going to have to figure out ways where we can voluntarily and responsibly share data. And this has been part of the discussion, but hearing the conversations over the past week kind of underscore the need to continue to advance that work while we work on some of the other topics that we’ve been discussing.

Brad Staples

Great. Julian?

Julian Waits

More than anything, what this week has taught me is I’m old and this industry is moving.

Brad Staples

Okay, so stop saying you’re old. You don’t look old. You look great.

Julian Waits

This industry is moving so quickly. Again, skills that are needed and considered to be important today will no longer be necessary in five years. And if the workforce and if the users of the technologies aren’t evolving with it, we all fall behind. So what is a great advantage and opportunity in using AI, the danger is it also cannot. obsolete at the same time. And we need to be very careful of that and how we use it and then how we help, hopefully, to promote this throughout the world in a way that makes it equitable for everyone.

Brad Staples

Great. Thank you. Amit, Sachin, any reflections?

Amit Chadha

Yeah, I think one of my big takeaway from this week was some parts of the world are focused on AI as an influence. Some part of the world is focused on governance of the AI. I think India is focused on impact of AI at the grassroots level. Thinking about how AI will impact a farmer or a small school or an NGO or a small hospital has been the focus. And it resonates with me because mission of my team is to keep everyone safe at scale. And when I say everyone, it’s not just about Google or Alphabet or not just about our billions users. but the entire society, everyone at scale, and how to make sure we become the architect and not just the consumer of AI and make sure it reaches to the grassroots level is one area to think about.

Sachin Kakkar

I agree with that. So, of course, outside of the traffic bit, right? What you learn, if you ask me, in the whole week that I’ve seen is that if I, and I’ve been in this business for, I don’t want to date myself, so say a couple of decades and we leave it there. But people used to say India is a back office. That’s how it started in 90s. People said India is a back office. Y2K happened and they said the IT industry will be over, right? Because Y2K, that’s all there is. Today, the IT industry, engineering industry together is $600 billion. We move forward. People said, are you going to take data? And are you going to?

Is data going to get leaked? and then COVID came and India proved yet again there was not a single data leakage that happened from India Inc anywhere. There are some draconian rules. We don’t allow our employees to use USBs, blah, blah, blue, blue. Net result, zero data leakage, absolute privacy and the government comes down very heavily if they get something like this. So they’ve been able to create a safe environment. Move forward. People used to say is India a market? This last week and forget technology companies, if you just walk the floors, you see people like Schneider, you see people like Vertiv, you see others, they are developing products for India. In India, you’re developing products to the world from India and it’s no longer just a cost base.

So if I was to say there’s one thing that I’ve learned in the last week, it is that India is no longer the back office for AI. it is actually the front office for AI for the world and that’s the net summary that I would draw in the entire week that I’ve been here

Brad Staples

Thank you, that’s very funny Bill

Amanda Craig Deckard

And I, you know, zooming out to the sort of highest level, one of the things that I really genuinely felt this week that has been very exciting to me is that there is a lot of energy around how to deploy this technology, how to have impact it’s been actually really fun to be in a lot of sessions with students and entrepreneurs that you can really feel the energy and I feel that it has the conversation around governance has come along and felt integrated in a really genuine way as well, if we look at the kind of summit series that kicked off a few years ago at Bletchley, I think it’s fair to say early on the emphasis of the conversation felt very safety and security heavy last year In France, there was a big pivot to trying to think about the opportunity.

And what I see in India this week is a genuine integration of those conversations and a deepening of those conversations. So really, what do we mean when we say impact? What really do we want to see in deploying this technology? And then sort of not taking for granted that, of course, governance actually has to come along with that. You have to really do the deep, hard work around things like multilingual AI. And there’s a real need for a partnership in moving those things forward. And there’s a real need to think about governance steps so that you can have trust in this technology. India actually just passing a law last week thinking about how to mark AI -generated content.

There’s a real sort of recognition that some of those steps are going to be important. And you don’t want to stop or have those steps sort of prevent deployment of the technology or realization of the benefits. But, like, you know, we have to do the deep work together to sort of move. Forward across a dime. A dime. and impact and governance together.

Brad Staples

Thank you. Thanks, Amanda. We’ve got a few minutes. If anyone would like to chip in. Great. Hands are going up. The room’s filled, by the way, while we’ve been going along, and it’s been a great conversation. Let’s hand one or two mics out to colleagues around the room, if we can, to the lady here on the front.

Audience

Hello? Hello? Okay. Right. Thanks, and I appreciate the comments and the traffic. I think we’ve all got a traffic story. Now, I hear a lot of talk about upskilling, co -creation, which are all very important things. I agree. But what I’m also hearing a lot from, and I’m sure you all are too, is the issue of speed of this technology that could potentially outspace some of this real scenario. So my question to you is, you know, what do we – and this goes to anyone who might want to answer or has some real thoughts on it. What do you think might – be the gaps between that that we would need to address in a transition process between upscaling and real economic displacement.

Brad Staples

Who can grab that? Yeah you put the mic Julian you’re gonna give it a go.

Julian Waits

It’s a real problem right meaning technology is moving so quickly as I said years ago I would tell young people in technology learn to be the best programmer you can. Now with agentic AI especially with the usage of MCP where you can have multiple agents talking to each other sharing information it’s now learning to be the best user and prompter of the technology understanding the outcomes but there’s gonna be some displacement. It’s you know right now I would tell you AI especially in the security context I can probably eliminate 60 % of the things that humans have to look at today. but there’s still the 40 % where a human has to be involved to make a determination around risk to an entity, whether it’s a government, whether it’s defense, whether it’s a business.

And so it’s really helping them evolve to this next level of user, this next level of programmer, if you want to call it that. And there probably will be some displacement that we just can’t get around.

Brad Staples

Gentleman in the front.

Audience

I actually have an extension of the same concern that the lady shared. The speed is one aspect, but also I think there’s a whole information arbitrage between the people who are creating and pioneering in the AI space versus the others to whom the information is reaching. And the impact of that on the power polarization and even the democracies. You know, that possibility I sense. And a lot of the conversation that I hear today is assuming that, you know, AI is moving linearly, but I see it moving exponentially. I agree. With a polarizing effect. Yes. Yes. Both. Both the polarizing effect and the effect, you know, like I think 40 % that Serge just spoke about. For me, that 40 % is not really 40%.

It’s just that we want to be very, very careful. But if we were to not care so much about how accurate and how much data standards we have. It could be 100%. You know, it’s very large. You know, I think the displacement can happen very fast. So I’m really concerned about how things are moving. I’m not sure if my concern is being shared by people in the panel.

Brad Staples

Anyone want to respond?

Lee Tiedrich

I mean, I think we need to focus on AI literacy because, you know, again, the technology is moving so fast. How do we make sure people in their everyday lives? People in the workforce have access to education so they can continue to upskill. and I also think being in academia after having been in the private sector for we won’t go into how many decades but teaching students how to think I think a good student when you’re looking at your career trajectory it’s not just coming out of college with a set of skills but teaching them how to think, how to problem solve and I think it’s really the public -private partnerships that Amanda mentioned with academia is really important because a lot of times the tenured faculty, they don’t know how to teach that to students and bringing people in to tell them this is how you adapt, these are sort of what you’re going to expect in your career and I say this not only from the perspective of being in academia but having two children of my own in their 20s who are just starting their career and sort of expect the unexpected but learn how to be on your toes I think a lot of it is just having the good analytics skills, having good communication skills and if you have those core skills you’re going to be able to adapt and it will carry forward in the future.

Brad Staples

Great. I think we’ve got time for one more question. Okay, gentle. Oh, the lady who, sorry, the lady who has the mic. She has the mic.

Audience

Thank you so much. My name is Rita Soni. I work with a company that’s operating in small -town India, delivering all these tech services that many of these companies are doing. And my question is actually for Amanda, because I think she was the only one who really brought up the digital divide that continues to exist, both in India and across the globe. I actually didn’t feel like I heard very much about how to actually bridge that. Yesterday I didn’t have one of those special passes to go to the events on the 19th, so instead I visited a local nonprofit called the Digital Empowerment Foundation, which has been around for more than 20 years, doing incredible work in rural India.

And they’re simply talking about last -mile Internet connectivity, let alone the enablement or ease. in the critical thinking that Lee just mentioned. So just a few more words on how it is that we can bridge this digital divide and make it more equitable, because I think the more folks are going to be excluded, the more different kinds of problems that we’re going to have.

Amanda Craig Deckard

Yeah, and I think you may have come in after we talked briefly about some of the work that we’re doing to address the digital divide. And for a lot of words, I would point you actually to we published a blog on Wednesday where we talked about investments in five areas that we’re thinking about to close the gaps that we see. And we actually point to the work that we’ve done using our own telemetry to sort of track these gaps and their trajectory and really lifted up our own concerns about the trajectory. And so among the areas of investment, infrastructure is really foundational. And we actually do talk in the blog about of course, infrastructure in terms of like AI.

compute capacity, but actually the fundamentals beyond, like, in terms of connectivity, energy access as really important as well. And then we talked about scaling multilingual and multicultural AI capabilities, really working with local communities on local use cases and the kind of deep work that we can do to sort of help bring the technology to people and see, like, even in agriculture, for example, we at Microsoft Research have done a lot of projects, like, in close collaboration with local communities and try to see, like, how could this serve you and then also learn from how the technology needs to evolve in order to do so better. And basically then also taking a step back and continuing to study diffusion so we understand, like, are our interventions working?

Are they not? If so, what can we learn and how can we improve how we’re intervening?

Brad Staples

Okay, so time’s up, everyone. Thank you so much for your contributions and for joining us at different points during the conversation. Thanks to the panelists for a really rich and diverse conversation. It’s been a real pleasure to have you with us. And I think we end with a sense of optimism that no matter what the challenges of the digital divide and those other elements, there’s probably an AI solution to the AI challenges that we’re creating. Thanks. Thank you. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (19)
Factual NotesClaims verified against the Diplo knowledge base (7)
Confirmedhigh

“Roughly 70 % of AI’s economic value could become concentrated in Western corporations and China if current trends continue.”

The knowledge base notes that some estimates suggest 70 % of AI’s economic value risks being concentrated in Western economies and China under current trends [S1].

Additional Contextmedium

“Democratising AI will require intentional design, international collaboration, and coordinated action across research, workforce development, private‑sector partnerships, and robust safety and security measures.”

Additional sources highlight that global AI governance must involve inclusive participation and address concentration of power in a few companies and countries, underscoring the need for coordinated, democratic action [S94].

Confirmedhigh

“Lee Tiedrich presented the second International AI Safety Report, noting progress in evaluation techniques but a persistent gap.”

Lee Tiedrich is cited as emphasizing the need for global collaboration to develop common evaluation standards, indicating awareness of both progress and remaining gaps in AI safety assessment [S10].

Confirmedmedium

“ISO 42 001 is an early AI safety standard and the NIST “zero draft” will feed into future ISO work.”

The knowledge base reports ongoing work to incorporate AI safety assessments into the ISO process, with expectations that drafts will be accepted within ISO standards [S99].

Confirmedmedium

“Regional pre‑standard initiatives such as the Hiroshima AI process foster cross‑regional stakeholder cooperation.”

The Hiroshima process is identified as an instrument to promote collaboration among regional stakeholders on AI governance [S101].

Confirmedhigh

“Regulators often lag behind the rapid pace of AI development, so regulation should follow robust, evidence‑based evaluation frameworks.”

The rapid development of AI is described as presenting unprecedented challenges for slower-moving regulatory frameworks, confirming the lag noted in the claim [S10].

Additional Contextmedium

“International agreements and verification technologies will be needed for AI safety at the global level.”

The knowledge base stresses that future AI governance will require international agreements and technical means for verification, adding nuance to the discussion of regulation and standards [S96].

External Sources (102)
S1
Welfare for All Ensuring Equitable AI in the Worlds Democracies — -Amit Chadha- Managing Director and CEO of L&T Technology Services
S2
https://dig.watch/event/india-ai-impact-summit-2026/welfare-for-all-ensuring-equitable-ai-in-the-worlds-democracies — And they’re the things we’re going to talk about on the panel today. And my colleagues are extremely well -placed. to sh…
S3
https://dig.watch/event/india-ai-impact-summit-2026/welfare-for-all-ensuring-equitable-ai-in-the-worlds-democracies — Great to have you with us. Amanda Craig -Dekard, Senior Director, Office of Responsible AI at Microsoft. Great to have y…
S4
Welfare for All Ensuring Equitable AI in the Worlds Democracies — -Sachin Kakkar- India Site Development, Privacy, Safety and Security at Google
S5
Welfare for All Ensuring Equitable AI in the Worlds Democracies — – Amanda Craig Deckard- Amit Chadha – Sachin Kakkar- Amanda Craig Deckard- Amit Chadha – Sachin Kakkar- Julian Waits- …
S6
S7
Announcement of New Delhi Frontier AI Commitments — -Brad: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified
S8
Keynote-Brad Smith — -Brad Smith: Role/Title: Vice Chair and President of Microsoft; Areas of expertise: Technology policy, privacy, cybersec…
S9
Welfare for All Ensuring Equitable AI in the Worlds Democracies — – Lee Tiedrich- Amanda Craig Deckard – Lee Tiedrich- Sachin Kakkar
S10
Agents of Change AI for Government Services & Climate Resilience — – Lee Tiedrich- Srinivas Tallapragada Tiedrich advocates for developing comprehensive global standards through internat…
S11
Welfare for All Ensuring Equitable AI in the Worlds Democracies — Great to have you with us. Amanda Craig -Dekard, Senior Director, Office of Responsible AI at Microsoft. Great to have y…
S12
https://dig.watch/event/india-ai-impact-summit-2026/welfare-for-all-ensuring-equitable-ai-in-the-worlds-democracies — Great to have you with us. Amanda Craig -Dekard, Senior Director, Office of Responsible AI at Microsoft. Great to have y…
S13
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S14
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S15
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S16
Artificial Intelligence & Emerging Tech — Jörn Erbguth:Well, I would like to stress that flexibility is key, because we don’t know what applications will be there…
S17
Responsible AI for Children Safe Playful and Empowering Learning — For a child living in urban Delhi, AI has found its way into their education either through the home or the school. But …
S18
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — The panel reached consensus on the need for fundamental educational reform to prepare students for an AI-integrated futu…
S19
Education meets AI — They stressed that understanding where students currently stand in terms of education and adapting teaching methods acco…
S20
Opening of the session — El Salvador: Thank you, Chair. El Salvador, thank you for convening this session. For my country, it is essential to …
S21
HIGH LEVEL LEADERS SESSION IV — Cooperation among stakeholders, including the government, industry, academia, and civil society, is seen as crucial to a…
S22
WS #199 Ensuring the online coexistence of human rights&child safety — The conversation also touched on the global nature of the problem, the importance of considering victims’ perspectives, …
S23
What is it about AI that we need to regulate? — Multiple sessions highlighted the dangers of simply copying governance models without adaptation. InDay 0 Event #257, Lu…
S24
The Tokenization Economy — However, it was noted that the principle of ‘same activity, same risk, same regulation’ presents challenges when it come…
S25
Digital Public Goods and the Challenges with Discoverability | IGF 2023 — Nonetheless, the path to widespread adoption of open-source software necessitates capacity development across multiple d…
S26
Open Forum #66 the Ecosystem for Digital Cooperation in Development — Tale Jordbakke: Sure. I do think that we as a government agency can play a role. Firstly, by being clear on that the pol…
S27
Laying the foundations for AI governance — Dawn Song: Yeah, that’s a great question. I think in AI safety and security, we are facing huge challenges. The field is…
S28
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — All right. Just speaking for myself, I can’t wait to use agents. I feel like it’s a lot of developer communities that ha…
S29
Keynote by Uday Shankar Vice Chairman_JioStar India — This comment is transformative because it reframes India’s role from service provider to global leader. The distinction …
S30
From India to the Global South_ Advancing Social Impact with AI — AI is the new electricity. The question is who has the switch? And today that’s what we will be discussing. You know, if…
S31
Opening — There is a need to strike the right balance between fostering innovation and implementing regulation in the field of AI …
S32
E-commerce and Sustainability: an overlooked nexus (Brazilian Center for International Relation – CEBRI) — They caution against excessive regulation, as it may stifle innovation and economic progress, particularly in developing…
S33
Microsoft details threat from new AI jailbreaking method — Microsoft haswarnedabout a new jailbreaking technique called Skeleton Key, which can prompt AI models to disclose harmfu…
S34
AI Meets Cybersecurity Trust Governance & Global Security — Udbhav highlights that large language models are inherently probabilistic, which makes them vulnerable to prompt‑injecti…
S35
How to make AI governance fit for purpose? — International Cooperation and Standards Role of international cooperation and standards Singapore advocates against fr…
S36
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — – **Balancing Global Cooperation with Regional Diversity**: Extensive discussion on how to achieve policy interoperabili…
S37
Smart Regulation Rightsizing Governance for the AI Revolution — The speakers demonstrated strong consensus around pragmatic, collaborative approaches to AI governance that balance glob…
S38
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — – Kristalina Georgieva- Brad Smith 38,000 GPUs available through public-private partnership as common compute facility….
S39
What policy levers can bridge the AI divide? — – Tatenda Annastacia Mavetera- Hubert Vargas Picado- Emmy Lou Versoza Delfin Development | Sociocultural Kone argues t…
S40
Global Digital Compact: AI solutions for a digital economy inclusive and beneficial for all — Microsoft Elevate represents the next chapter of corporate philanthropy, combining technology support, donations, and sa…
S41
WS #162 Overregulation: Balance Policy and Innovation in Technology — Amattey uses the COVID-19 pandemic as an example of how innovation can thrive with less regulation in times of crisis. H…
S42
Safe Digital Futures for Children: Aligning Global Agendas | IGF 2023 WS #403 — The analysis argues for equalizing trust and safety investment. Market concentration is also opposed, with a call for a …
S43
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — Moreover, Aryal urges for a thorough exploration of the potential risks that come with AI in the context of cybersecurit…
S44
Secure Finance Risk-Based AI Policy for the Banking Sector — “And it should be seen as a, it should be seen as an instrument.”[6]. “That can be addressed only through the governance…
S45
Ten cybersecurity predictions for 2026 from experts: How AI will reshape cyber risks — Evidence from threat intelligence reporting and incident analysis in 2025 suggests that AI will move from experimental u…
S46
Advancing Scientific AI with Safety Ethics and Responsibility — -Balancing Open Science with Security: Panelists explored the challenge of preserving open science benefits while preven…
S47
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Achieving inclusive AI requires addressing inequalities across three fundamental areas: access to computing infrastructu…
S48
WS #279 AI: Guardian for Critical Infrastructure in Developing World — AI technologies can facilitate multilingual support in security applications. This capability allows for broader access …
S49
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — AI policies in Africa should ideally espouse a context-specific and culturally sensitive orientation. The prevailing ten…
S50
How to ensure cultural and linguistic diversity in the digital and AI worlds? — Xianhong Hu:Thank you very much Mr. Ambassador. Good morning everyone. First of all please allow me, I’d like to be able…
S51
Ateliers : rapports restitution et séance de clôture — Joseph Nkalwo Ngoula Merci. C’est toujours difficile de restituer la parole d’experts de haut vol. sans courir le risque…
S52
Safe Digital Futures for Children: Aligning Global Agendas | IGF 2023 WS #403 — The analysis examines topics such as online crime, the dark web, internet fragmentation, internet companies, innovation,…
S53
Hello from the CyberVerse: Maximizing the Benefits of Future Technologies — The timing of introducing frameworks, standards, and regulations is also deemed critical. If introduced too soon, regula…
S54
WS #162 Overregulation: Balance Policy and Innovation in Technology — It prompted discussion of specific examples where regulation enabled or catalyzed innovation, adding nuance to the debat…
S55
WS #438 Digital Dilemmaai Ethical Foresight Vs Regulatory Roulette — This perspective reframes regulation as potentially enabling innovation by providing predictability, building trust, and…
S56
Hard power of AI — In conclusion, the analysis provides insights into the dynamic relationship between technology, politics, and AI. It hig…
S57
Australia weighs risks and rewards of rapid AI adoption — AI is reshaping Australia’s labour market at a pace that has reignited anxiety aboutjob security and skills. Experts say…
S58
Keynote by Mathias Cormann OECD Secretary-General India AI Impact — A critical concern addressed is workforce displacement, with approximately 27% of employment in occupations at highest r…
S59
RegHorizon 2nd AI Policy Conference — The wide application of AI technologies has enormous benefits, but it also presents unprecedented challenges in terms of…
S60
Welfare for All Ensuring Equitable AI in the Worlds Democracies — Sachin Kakkar from Google illustrated the localisation challenge through the company’s IndIC GenBench initiative, which …
S61
Discussion Report: Sovereign AI in Defence and National Security — Faisal responds to concerns about competing global AI policies by arguing that the sovereign AI framework is adaptable t…
S62
Empowering Workers in the Age of AI — Development | Economic Speed of technological change vs. training capacity The rapid pace of technological change, par…
S63
Shaping the Future AI Strategies for Jobs and Economic Development — “what they sometimes upskill with may not be enough in two years time so I think this upskilling is going to be really a…
S64
The open-source gambit: How America plans to outpace AI rivals by democratising tech — Labour:AI-related job displacement is considered a significant risk. The plan calls for guidance on using state Rapid Re…
S65
Bottom-up AI and the right to be humanly imperfect | IGF 2023 — A particularly thought-provoking point in the discourse was the expression of concern regarding the rapid displacement o…
S66
AI, automation, and human dignity: Reimagining work beyond the paycheck — Current reskilling initiatives, while well-intentioned, rarely address these structural inequalities. They tend to be de…
S67
The Declaration for the Future of the Internet: Principles to Action — The conversation also inputs a compelling argument on the intricate equilibrium between regulation and innovation. Addre…
S68
Tackling disinformation in electoral context — While some regulation is necessary, over-regulation should be avoided as it could stifle innovation and growth in the di…
S70
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — Public-private partnerships play a key role in these collaborations.
S71
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Multi-stakeholder partnerships between policy researchers and private sector are essential for surfacing potential harms…
S72
Open Forum #33 Building an International AI Cooperation Ecosystem — Multi-stakeholder framework bringing technical expertise, public interest, and global perspective is essential Public-p…
S73
Welfare for All Ensuring Equitable AI in the Worlds Democracies — Sachin Kakkar from Google illustrated the localisation challenge through the company’s IndIC GenBench initiative, which …
S74
How to make AI governance fit for purpose? — International Cooperation and Standards Role of international cooperation and standards Singapore advocates against fr…
S75
Parliamentary Roundtable Safeguarding Democracy in the Digital Age Legislative Priorities and Policy Pathways — International Cooperation and Global Standards Need for international cooperation and global standards rather than frag…
S76
Artificial intelligence (AI) – UN Security Council — The discussion on the unintended consequences of rushed AI regulations was a central theme across multiple sessions duri…
S77
Chinese leading AI expert argues for AI governance by the UN — The rapid development of AI technology has outpaced existing regulatory frameworks, creating challenges in areas such as…
S78
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — – Kristalina Georgieva- Brad Smith 38,000 GPUs available through public-private partnership as common compute facility….
S79
Global Digital Compact: AI solutions for a digital economy inclusive and beneficial for all — Development | Economic Microsoft Elevate represents the next chapter of corporate philanthropy, combining technology su…
S80
What policy levers can bridge the AI divide? — – Tatenda Annastacia Mavetera- Hubert Vargas Picado- Emmy Lou Versoza Delfin Development | Sociocultural Kone argues t…
S81
Manufacturing’s Moonshots Are Landing . . . Are You Ready for the Next Wave? — Furthermore, it highlights the significance of collaboration between the public and private sectors in future skills tra…
S82
WS #162 Overregulation: Balance Policy and Innovation in Technology — Amattey uses the COVID-19 pandemic as an example of how innovation can thrive with less regulation in times of crisis. H…
S83
Safe Digital Futures for Children: Aligning Global Agendas | IGF 2023 WS #403 — The analysis argues for equalizing trust and safety investment. Market concentration is also opposed, with a call for a …
S84
Conversational AI in low income & resource settings | IGF 2023 — They also highlight the importance of regulations to provide guardrails and prevent potential misuse of AI. However, it …
S85
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — Moreover, Aryal urges for a thorough exploration of the potential risks that come with AI in the context of cybersecurit…
S86
Ten cybersecurity predictions for 2026 from experts: How AI will reshape cyber risks — Evidence from threat intelligence reporting and incident analysis in 2025 suggests that AI will move from experimental u…
S87
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — Patel outlines a three‑layer security approach: protect agents from malicious inputs, protect the world from rogue agent…
S88
AI Meets Cybersecurity Trust Governance & Global Security — “Let’s figure out what has to be done.”[88]”We need to be able to know a lot more about how we roll it out safely.”[89]”…
S89
Driving Social Good with AI_ Evaluation and Open Source at Scale — Benchmarking, Standardization, and Multilingual/Local Contexts
S90
Ateliers : rapports restitution et séance de clôture — Aurélien Macé Apparemment, j’ai droit à 6,6 minutes, deux fois plus que les autres, ce qu’on m’a dit. Le thème de vendre…
S91
#205 L&A Launch of the Global CyberPeace index — Sociocultural | Human rights | Development Wisniak highlights that AI systems perform poorly for languages and dialects…
S92
AI race shows diverging paths for China and the US — The US administration’s new AI action plan frames global development as anAI racewith a single winner. Officials argue A…
S93
The Foundation of AI Democratizing Compute Data Infrastructure — So as we come to the end of our panel, with everything that’s been said, even with all the money on the table, free mone…
S94
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Global governance of AI is a precursor for a democratic development and evolution. And we need to continue to develop an…
S95
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S96
AI Safety at the Global Level Insights from Digital Ministers Of — And I’m… I’m really gratified that the report continues to be anchored in that broader aperture of risk. And eventual…
S97
Who Watches the Watchers Building Trust in AI Governance — These technical limitations highlight why current benchmarks, while useful, remain inadequate for comprehensive safety a…
S98
WS #31 Cybersecurity in AI: balancing innovation and risks — Dr. Alison: Okay. Thank you. So I speak from a personal perspective here. So I don’t know if, realistically, I don’t…
S99
https://dig.watch/event/india-ai-impact-summit-2026/setting-the-rules_-global-ai-standards-for-growth-and-governance — I think that would be super useful. We’re leading some work on testing, well, benchmarking and rate teaming, primarily m…
S100
Internet Governance at the Point of No Return — Besides that, standards of different natures can constitute a contribution for companies in the efforts to open up new m…
S101
Multi-stakeholder Discussion on issues about Generative AI — Hiroshima process will be one of the instruments to foster this collaboration
S102
Open Forum #71 Advancing Rights-Respecting AI Governance and Digital Inclusion through G7 and G20 — Sabhanaz Rashid Diya: Thank you, Alison. And good morning, everyone. I am Sabhanaz Rashid Diya, I’m with the Tech Global…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
L
Lee Tiedrich
4 arguments190 words per minute1193 words374 seconds
Argument 1
Need for global standards with cultural customization (Lee Tiedrich)
EXPLANATION
Lee argues that international AI standards are essential but must be adaptable to different languages, cultures, and norms. He stresses that while standards like ISO 42001 provide a starting point, they need to be accelerated and customized for local contexts.
EVIDENCE
He notes that ISO has released a standard (ISO 42001) and calls for faster development, while also highlighting the tension between cross-border applicability and the need for cultural and linguistic customization [38-41]. He references his experience at NIST working on a zero draft for ISO and mentions initiatives such as the Hiroshima AI process that bring together diverse regional stakeholders [42-46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Accelerated international standards that must accommodate cultural and linguistic differences are highlighted in [S1], and Lee’s call for global collaboration on evaluation standards is echoed in [S10].
MAJOR DISCUSSION POINT
Global standards customization
AGREED WITH
Sachin Kakkar, Brad Staples
Argument 2
Evaluation‑first approach before imposing regulation (Lee Tiedrich)
EXPLANATION
Lee contends that technical evaluation frameworks should precede any regulatory action on AI. By establishing robust assessment methods, regulators can make informed decisions without stifling innovation.
EVIDENCE
He describes a 30-year career across government, academia, and private sector, emphasizing the need to develop evaluation techniques that set safety thresholds before debating regulation [89-95].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Lee’s emphasis on developing technical evaluation frameworks prior to regulation is supported by the discussion of evaluation-first strategies in [S10] and the explicit mention of his stance in [S1].
MAJOR DISCUSSION POINT
Evaluation before regulation
AGREED WITH
Amit Chadha, Brad Staples
DISAGREED WITH
Amit Chadha
Argument 3
Emphasis on AI literacy and problem‑solving skills in education (Lee Tiedrich)
EXPLANATION
Lee stresses that AI literacy, critical thinking, and problem‑solving are core competencies needed for the workforce and everyday citizens. He advocates for public‑private partnerships to embed these skills in curricula and lifelong learning.
EVIDENCE
He calls for AI literacy to keep pace with rapid technology change, suggests teaching students how to think and solve problems, and highlights the importance of analytics and communication skills, noting his personal perspective as a parent of two young adults [387-393].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for AI-enabled education and curriculum reform is discussed in [S17], [S18] and [S19], providing context for Lee’s focus on AI literacy and problem-solving competencies.
MAJOR DISCUSSION POINT
AI literacy in education
AGREED WITH
Sachin Kakkar, Amit Chadha, Amanda Craig Deckard, Julian Waits
Argument 4
Global cooperation across academia, industry, and civil society is essential for benefits and risk mitigation (Lee Tiedrich)
EXPLANATION
Lee concludes that achieving AI safety and realizing its benefits requires coordinated action among governments, academia, industry, and civil society worldwide. He links this cooperation to broader goals such as the UN Sustainable Development Goals.
EVIDENCE
He references his work with OECD and global AI partnerships, noting progress in safety reports but urging continued collaboration and attention to benefits, not just risks, and mentions the need for data standardization and voluntary data-sharing foundations [283-285].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of multi-stakeholder collaboration for AI governance is underscored in [S21] and reinforced by the global collaboration theme in [S10]; Lee’s work with OECD is noted in [S1].
MAJOR DISCUSSION POINT
Global multi‑stakeholder cooperation
S
Sachin Kakkar
4 arguments149 words per minute1152 words462 seconds
Argument 1
Risk of copying regulations without local adaptation (Sachin Kakkar)
EXPLANATION
Sachin warns that transplanting regulations or standards from one market to another often fails because local needs, languages, and constraints differ. He advocates for localized solutions that respect regional specifics.
EVIDENCE
He cites the challenge of copying regulations, the need to localize them, and gives Google’s Indiq GenBench as an example that supports 29 Indian languages, 12 scripts, and four language families, illustrating the importance of localization [50-52].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Warnings against transplanting regulations without adaptation appear in [S23] and the challenges of “same activity, same risk” adaptation are detailed in [S24]; Sachin’s localisation example is cited in [S1].
MAJOR DISCUSSION POINT
Localizing regulations
AGREED WITH
Lee Tiedrich, Brad Staples
Argument 2
Co‑creation model: open‑source frameworks, capacity building, workforce upskilling (Sachin Kakkar)
EXPLANATION
Sachin proposes moving from a traditional technology transfer model to a co‑creation approach where developers and governments collaborate on open‑source frameworks, capacity building, and upskilling. This model treats standards and regulations as enablers rather than barriers.
EVIDENCE
He outlines three dimensions: open-source frameworks (e.g., Safe SAIF, COSI coalition), capacity building (sharing threat intelligence, tools like SynthID), and workforce upskilling (digital literacy, grants to institutes) [61-78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift to open-source co-creation and capacity development is discussed in [S25] and government open-source policy in [S26]; the co-creation concept is also highlighted in [S1].
MAJOR DISCUSSION POINT
Co‑creation for AI development
AGREED WITH
Amit Chadha, Amanda Craig Deckard, Julian Waits, Lee Tiedrich
Argument 3
Emergence of AI agents vs AI defenders; self‑defending adaptive systems (Sachin Kakkar)
EXPLANATION
Sachin describes a future where AI agents can both attack and defend infrastructure, arguing that AI‑driven defenders can reverse the traditional defender’s dilemma. He envisions self‑defending adaptive systems that automate security tasks.
EVIDENCE
He explains that AI agents can scale attacks on critical infrastructure, but AI-powered defenders can automate 80 % of drudgery, giving defenders an aggregate advantage and enabling self-defending adaptive systems [263-273].
MAJOR DISCUSSION POINT
AI agents vs AI defenders
Argument 4
India shifting from back‑office to front‑office AI role, focusing on grassroots impact (Sachin Kakkar)
EXPLANATION
Sachin asserts that India has moved beyond being a low‑cost back‑office hub to becoming a front‑office AI innovator that addresses grassroots challenges such as agriculture, healthcare, and education. He emphasizes AI’s impact at the community level.
EVIDENCE
He notes that India now develops products for the world, cites the lack of data leaks during COVID-19, and describes India’s role as a front-office for AI rather than a cost base [311-321].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s transition to a front-office AI innovator is described in [S29] and [S30]; Sachin’s framing of this shift is reflected in [S1].
MAJOR DISCUSSION POINT
India’s evolving AI role
A
Amit Chadha
3 arguments164 words per minute1854 words675 seconds
Argument 1
Over‑regulation can stifle innovation (Amit Chadha)
EXPLANATION
Amit cautions that excessive regulation may hinder AI innovation and urges a balanced approach. He suggests careful calibration of regulatory scope to avoid choking technological progress.
EVIDENCE
He explicitly states that “too much of regulation can stifle innovation” and calls for careful consideration of how much regulation to apply [131-133].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The tension between regulation and innovation is examined in [S31] and [S32], and Amit’s caution aligns with the broader discussion in [S1].
MAJOR DISCUSSION POINT
Regulation vs innovation
AGREED WITH
Lee Tiedrich, Brad Staples
DISAGREED WITH
Lee Tiedrich
Argument 2
Multilingual AI reduces prompt‑injection vulnerabilities; new jailbreak benchmark (Amit Chadha)
EXPLANATION
Amit explains that AI systems lacking robust multilingual capabilities are vulnerable to prompt‑injection attacks in low‑resource languages. He highlights a new multilingual jailbreak benchmark as a step toward mitigating this risk.
EVIDENCE
He describes how attackers can exploit poorly supported languages (e.g., Tamil) to bypass safety systems, and notes the development of a multilingual jailbreak benchmark by ML Commons that now includes Indic and Asian languages [250-257].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multilingual jailbreak benchmarks and prompt-injection risks are detailed in [S33] and [S34]; the vulnerability of low-resource languages is noted in [S1].
MAJOR DISCUSSION POINT
Multilingual security
Argument 3
Internal upskilling via curriculum updates, billable‑time training, patent incentives (Amit Chadha)
EXPLANATION
Amit outlines his company’s strategy to address the AI skills gap by updating college curricula, integrating upskilling into billable work, and incentivizing innovation through patents and publications. This approach aims to keep the workforce future‑ready while maintaining productivity.
EVIDENCE
He details three actions: collaborating with colleges to refresh curricula [150-155], upskilling employees during billable time and tracking personal technology effort, leading to an increase in patent filings from 50 to 200 per year and a rise in employee-driven innovation from 19 % to 52 % [156-170].
MAJOR DISCUSSION POINT
Company‑driven upskilling
AGREED WITH
Sachin Kakkar, Amanda Craig Deckard, Julian Waits, Lee Tiedrich
A
Amanda Craig Deckard
2 arguments180 words per minute1537 words509 seconds
Argument 1
Microsoft Elevate program targeting 20 million Indians, educator training, multilingual AI (Amanda Craig Deckard)
EXPLANATION
Amanda describes Microsoft’s Elevate initiative, which aims to skilling millions of Indians through educator programs, cloud access, and multilingual AI tools. The goal is to close the AI skills gap at scale.
EVIDENCE
She cites a commitment to upskill 10 million Indians by 2030, already reaching 5.6 million, and a doubled target of 20 million by 2030, supported by the new Elevate for Educators program partnering with schools, vocational institutes, and higher-education institutions [122-126].
MAJOR DISCUSSION POINT
Microsoft Elevate scaling
Argument 2
Investment in infrastructure, connectivity, energy, and multilingual AI to close gaps (Amanda Craig Deckard)
EXPLANATION
Amanda outlines Microsoft’s broader investment strategy to bridge digital divides, focusing on foundational infrastructure such as connectivity, energy, AI compute, and multilingual capabilities. She emphasizes measuring diffusion to guide interventions.
EVIDENCE
She references a blog detailing five investment areas: hard infrastructure (connectivity, AI compute), scaling, multilingual AI, local AI deployment with community use cases, and diffusion measurement to assess impact [404-408].
MAJOR DISCUSSION POINT
Infrastructure and multilingual investment
B
Brad Staples
1 argument151 words per minute1335 words529 seconds
Argument 1
AI value concentration is not inevitable; intentional design and international collaboration required (Brad Staples)
EXPLANATION
Brad argues that the projected concentration of AI economic value in Western countries and China is not a foregone conclusion. He calls for intentional design, international collaboration, and inclusive innovation to democratize AI’s impact.
EVIDENCE
He cites estimates that 70 % of AI value could reside in Western economies, warns against accepting this outcome, and lists needed actions such as international collaboration, workforce development, private-sector partnerships, and trust, safety, and security measures [2-8].
MAJOR DISCUSSION POINT
Democratizing AI value
J
Julian Waits
3 arguments172 words per minute437 words152 seconds
Argument 1
Reliance on foreign talent and the need for continuous AI literacy (Julian Waits)
EXPLANATION
Julian points out that the U.S. tech sector depends heavily on foreign workers to stay competitive, and stresses the necessity of ongoing AI literacy to avoid falling behind. He links talent mobility to national AI capability.
EVIDENCE
He states that without foreign workers the U.S. would fall behind, emphasizing the reliance on overseas talent for AI development and the need for continuous learning [188-193].
MAJOR DISCUSSION POINT
Foreign talent dependence
Argument 2
AI can automate 60 % of security tasks, but human judgment remains essential (Julian Waits)
EXPLANATION
Julian notes that AI can handle the majority of routine security tasks, yet a significant portion still requires human expertise for risk assessment. This hybrid approach balances efficiency with necessary human oversight.
EVIDENCE
He explains that AI could eliminate 60 % of current security work, while the remaining 40 % needs human determination of risk for governments, defense, or businesses [366-368].
MAJOR DISCUSSION POINT
Human‑AI security partnership
DISAGREED WITH
Audience member
Argument 3
Rapid industry change demands continuous learning; optimism about AI solutions (Julian Waits)
EXPLANATION
Julian reflects on the fast pace of AI development, warning that skills become obsolete quickly and emphasizing the need for continual learning. He remains optimistic that AI itself can help solve the challenges it creates.
EVIDENCE
He remarks that the industry is moving so quickly that today’s important skills may disappear in five years, and stresses careful use of AI while expressing confidence that solutions will emerge [300-304].
MAJOR DISCUSSION POINT
Continuous learning and optimism
A
Audience
2 arguments157 words per minute519 words198 seconds
Argument 1
Concern about rapid technology outpacing upskilling efforts (Audience)
EXPLANATION
The audience member expresses worry that AI’s exponential speed outstrips upskilling programs, creating information arbitrage and polarizing effects. They highlight the risk of rapid displacement if standards and literacy do not keep pace.
EVIDENCE
The participant mentions the speed of AI, information arbitrage between pioneers and broader society, potential polarization of democracies, and fears of exponential displacement beyond the 40 % figure cited earlier [360-376].
MAJOR DISCUSSION POINT
Speed vs upskilling gap
DISAGREED WITH
Julian Waits, Audience member
Argument 2
Need for last‑mile connectivity and community‑level empowerment (Audience)
EXPLANATION
The audience member asks how to bridge the digital divide, emphasizing the importance of last‑mile internet connectivity and grassroots empowerment in rural India. They reference a local nonprofit’s work as an example of needed action.
EVIDENCE
She describes visiting the Digital Empowerment Foundation, which focuses on last-mile connectivity and community empowerment, and asks for concrete steps to address the divide [398-400].
MAJOR DISCUSSION POINT
Last‑mile connectivity
Agreements
Agreement Points
International AI standards must be adaptable to local languages, cultures and regulatory contexts
Speakers: Lee Tiedrich, Sachin Kakkar, Brad Staples
Need for global standards with cultural customization (Lee Tiedrich) Risk of copying regulations without local adaptation (Sachin Kakkar) Democratizing AI requires intentional design and international collaboration (Brad Staples)
All three speakers stress that while global standards are essential, they must be accelerated and customized for different languages, cultures and local market constraints to avoid ineffective copy-pasting of regulations [38-41][50-52][6-8].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for culturally sensitive standards is highlighted in discussions on inclusive AI for Africa and localisation initiatives, and the sovereign AI framework that can be tuned to national contexts [S49][S60][S61].
Regulation should be carefully calibrated and preceded by robust technical evaluation to avoid stifling innovation
Speakers: Lee Tiedrich, Amit Chadha, Brad Staples
Evaluation‑first approach before imposing regulation (Lee Tiedrich) Over‑regulation can stifle innovation (Amit Chadha) Question on trade‑off between global standards/regulation and innovation (Brad Staples)
Lee argues that evaluation frameworks must be established before debating regulation, Amit warns that excessive regulation harms innovation, and Brad explicitly asks about the trade-off, indicating shared concern for a balanced regulatory approach [89-95][131-133][79-80].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on the timing of standards warn that premature regulation can hinder innovation, while other analyses show that well-balanced regulation can enable innovation when based on solid technical assessment [S53][S54][S55][S68].
Building AI capacity through widespread upskilling, lifelong learning and AI literacy is critical
Speakers: Sachin Kakkar, Amit Chadha, Amanda Craig Deckard, Julian Waits, Lee Tiedrich
Co‑creation model: open‑source frameworks, capacity building, workforce upskilling (Sachin Kakkar) Internal upskilling via curriculum updates, billable‑time training, patent incentives (Amit Chadha) Microsoft Elevate program targeting millions of Indians, educator training, multilingual AI (Amanda Craig Deckard) Continuous learning and AI literacy needed to keep pace with rapid change (Julian Waits) Emphasis on AI literacy and problem‑solving skills in education (Lee Tiedrich)
All speakers highlight the need for systematic skill development-from open-source co-creation and university curricula to corporate incentives and national programmes-to ensure the workforce can adapt to fast-moving AI technologies [61-78][156-170][119-126][364-368][387-393].
POLICY CONTEXT (KNOWLEDGE BASE)
Reports identify large training gaps and call for continuous learning and targeted upskilling programs to mitigate displacement risks [S58][S63][S64][S66].
Public‑private partnership and multi‑stakeholder collaboration are essential for responsible AI deployment
Speakers: Brad Staples, Lee Tiedrich, Amanda Craig Deckard, Sachin Kakkar, Amit Chadha
Democratizing AI requires international collaboration (Brad Staples) Global cooperation across academia, industry, and civil society is essential (Lee Tiedrich) Public‑private partnership as a core pillar of Microsoft’s approach (Amanda Craig Deckard) Co‑creation model with governments and industry (Sachin Kakkar) Collaboration with government for capacity building and budgeting (Amit Chadha)
The panel repeatedly stresses that coordinated action among governments, industry, academia and civil society is needed to create standards, build capacity and ensure trustworthy AI [6-8][283-284][118-119][61-78][71-74].
Multilingual and multicultural AI capabilities are vital for inclusion, security and reducing vulnerabilities
Speakers: Sachin Kakkar, Amit Chadha, Amanda Craig Deckard, Lee Tiedrich
Risk of copying regulations without local adaptation; example of Indiq GenBench supporting 29 Indian languages (Sachin Kakkar) Multilingual AI reduces prompt‑injection vulnerabilities; new multilingual jailbreak benchmark (Amit Chadha) Investment in multilingual AI as part of Microsoft’s holistic approach (Amanda Craig Deckard) Need to customize standards for different languages and cultures (Lee Tiedrich)
All four speakers underline that supporting many languages and cultural contexts not only promotes equitable access but also mitigates security risks such as prompt-injection attacks, calling for dedicated benchmarks and tools [50-52][250-257][111-113][40-41].
POLICY CONTEXT (KNOWLEDGE BASE)
AI applications for security benefit from multilingual support, and localisation projects such as India’s GenBench illustrate the importance of cultural and linguistic diversity in AI systems [S48][S49][S60].
Similar Viewpoints
Both caution that premature or heavy regulation can hinder AI progress and advocate for technical evaluation as a prerequisite to policy decisions [89-95][131-133].
Speakers: Lee Tiedrich, Amit Chadha
Evaluation‑first approach before regulation (Lee Tiedrich) Over‑regulation can stifle innovation (Amit Chadha)
Both emphasize the importance of multilingual AI and localized solutions to ensure relevance and effectiveness across diverse linguistic communities [50-52][111-113].
Speakers: Sachin Kakkar, Amanda Craig Deckard
Risk of copying regulations without local adaptation (Sachin Kakkar) Microsoft Elevate program targeting multilingual AI capability (Amanda Craig Deckard)
Both argue that unchecked regulation or market concentration threatens equitable AI outcomes and that deliberate design and balanced policy are required [131-133][2-8].
Speakers: Amit Chadha, Brad Staples
Over‑regulation can stifle innovation (Amit Chadha) AI value concentration is not inevitable; needs intentional design and collaboration (Brad Staples)
Both stress that AI literacy, critical thinking and lifelong learning are essential to keep the workforce and citizens adaptable to rapid AI change [364-368][387-393].
Speakers: Julian Waits, Lee Tiedrich
Emphasis on AI literacy and continuous learning (Julian Waits) Emphasis on AI literacy and problem‑solving skills in education (Lee Tiedrich)
Both present concrete corporate strategies that blend open‑source collaboration with internal skill development to address the AI talent gap [61-78][156-170].
Speakers: Sachin Kakkar, Amit Chadha
Co‑creation model: open‑source frameworks, capacity building, workforce upskilling (Sachin Kakkar) Internal upskilling via curriculum updates, billable‑time training, patent incentives (Amit Chadha)
Unexpected Consensus
Rapid AI advancement outpacing upskilling efforts and causing potential displacement
Speakers: Audience, Julian Waits, Amit Chadha
Concern about rapid technology outpacing upskilling, information arbitrage and polarization (Audience) AI can automate 60 % of security tasks but 40 % still needs human judgment; displacement will occur (Julian Waits) Internal upskilling and patent incentives as a response to fast‑changing skill needs (Amit Chadha)
While the audience warned that AI’s exponential speed could outstrip training programs, both Julian and Amit acknowledged inevitable displacement and described proactive upskilling measures, revealing an unexpected alignment on the urgency of continuous learning [360-376][364-368][156-170].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses from Australia and global studies highlight that the speed of AI adoption exceeds workforce training capacity, raising concerns about job displacement [S57][S58][S62][S65].
Overall Assessment

The panel displayed a high degree of consensus around four core themes: (1) the need for globally coordinated AI standards that are culturally and linguistically adaptable; (2) a cautious, evaluation‑first approach to regulation to preserve innovation; (3) extensive public‑private collaboration coupled with robust capacity‑building programmes; and (4) multilingual AI as both an inclusion and security imperative. These shared positions suggest a collective willingness to pursue coordinated, inclusive and technically grounded AI governance frameworks.

Strong consensus across speakers, indicating that future policy and industry initiatives are likely to prioritize collaborative standard‑setting, balanced regulation, large‑scale upskilling and multilingual inclusivity, which together could mitigate concentration of AI value and enhance equitable AI diffusion.

Differences
Different Viewpoints
Timing and role of regulation versus innovation
Speakers: Amit Chadha, Lee Tiedrich
Over‑regulation can stifle innovation (Amit Chadha) Evaluation‑first approach before imposing regulation (Lee Tiedrich)
Amit warns that excessive regulation will choke AI innovation and calls for careful calibration of regulatory scope [131-133]. Lee argues that robust technical evaluation frameworks should be established first, and that regulators often cannot keep pace with rapid technology, suggesting regulation should follow evaluation rather than precede it [92-95]. The two speakers differ on when and how regulation should be applied.
POLICY CONTEXT (KNOWLEDGE BASE)
Ongoing debate about when to introduce regulations shows that both premature and delayed rules can hinder or help innovation, underscoring the need for balanced timing and scope [S53][S54][S55][S56][S67][S68].
Perceived speed of AI displacement and security automation
Speakers: Julian Waits, Audience member
AI can automate 60 % of security tasks, but human judgment remains essential (Julian Waits) Concern about rapid technology outpacing upskilling efforts (Audience)
Julian states that AI can eliminate about 60 % of current security work, leaving a remaining 40 % that still requires human judgment [366-368]. An audience participant counters that the exponential speed of AI could lead to far higher displacement-potentially 100 %-and that upskilling programs cannot keep pace, raising fears of polarization and rapid job loss [360-376]. This reflects a disagreement on the magnitude and immediacy of AI-driven displacement.
POLICY CONTEXT (KNOWLEDGE BASE)
Observations on rapid AI deployment in security contexts and its impact on employment illustrate concerns about the pace of displacement, echoed in security-focused AI discussions [S48][S56][S57].
Unexpected Differences
Speed of AI development versus upskilling capacity
Speakers: Audience member, Lee Tiedrich, Julian Waits
Concern about rapid technology outpacing upskilling efforts (Audience) Emphasis on AI literacy and lifelong learning as a remedy (Lee Tiedrich) Optimistic view that AI itself will help solve the displacement problem (Julian Waits)
The audience raised alarm that AI’s exponential pace could outstrip education and upskilling programs, potentially leading to massive displacement [360-376]. Lee responded by stressing AI literacy, problem-solving skills and public-private partnerships to keep the workforce adaptable [387-393]. Julian, however, expressed confidence that AI will provide solutions despite the rapid change [300-304]. The stark contrast between the audience’s urgency, Lee’s educational remedy, and Julian’s optimism was not anticipated earlier in the discussion.
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses point to a gap between fast AI progress and slower upskilling pipelines, calling for accelerated lifelong learning initiatives to keep pace [S62][S63][S64].
Overall Assessment

The panel displayed broad consensus on the importance of democratizing AI, building capacity and fostering public‑private collaboration. Disagreements centered on the timing and nature of regulation, the perceived immediacy of AI‑driven job displacement, and the preferred mechanisms for addressing the skills gap. While most participants agreed on the goals of inclusive AI development and security, they diverged on policy sequencing and implementation tactics.

Moderate – the disagreements are substantive but do not fracture the overall consensus. They highlight the need for coordinated policy design that balances innovation, regulation, and rapid upskilling, especially for developing regions.

Partial Agreements
All three panelists agree that the AI skills gap must be closed, but propose different pathways: Amanda emphasizes large‑scale public‑private skilling programs and educator partnerships; Sachin advocates a co‑creation approach with open‑source tools, capacity‑building grants and digital‑literacy initiatives; Amit focuses on aligning college curricula, integrating upskilling into billable work and incentivising innovation through patents and research time. The shared goal is workforce readiness, while the means diverge.
Speakers: Amanda Craig Deckard, Sachin Kakkar, Amit Chadha
Microsoft Elevate program targeting 20 million Indians, educator training, multilingual AI (Amanda Craig Deckard) Co‑creation model: open‑source frameworks, capacity building, workforce upskilling (Sachin Kakkar) Internal upskilling via curriculum updates, billable‑time training, patent incentives (Amit Chadha)
Both speakers concur that AI standards and regulations cannot be a one‑size‑fits‑all. Lee calls for accelerated international standards that are customizable to languages, cultures and norms [38-41], while Sachin warns against transplanting regulations wholesale and highlights the need for localized test‑beds such as Indiq GenBench [50-52]. They differ on the mechanism: Lee focuses on adapting global standards, Sachin on building local solutions.
Speakers: Lee Tiedrich, Sachin Kakkar
Need for global standards with cultural customization (Lee Tiedrich) Risk of copying regulations without local adaptation (Sachin Kakkar)
Takeaways
Key takeaways
AI’s economic value is currently concentrated in Western economies and China, but this outcome is not inevitable; intentional design and international collaboration are needed to democratize AI benefits. Global AI standards are essential, but they must allow cultural, linguistic, and regulatory customization for different regions. Co‑creation between governments and developers—through open‑source frameworks, capacity‑building, and workforce upskilling—is more effective than a simple transfer of regulations. Over‑regulation can stifle innovation; an evaluation‑first, evidence‑based approach should precede regulatory mandates. Public‑private partnerships are critical for closing the AI skills gap; programs such as Microsoft Elevate, internal upskilling, curriculum updates, and patent incentives are being deployed. AI‑specific security risks (e.g., prompt‑injection, jailbreaks) require multilingual robustness and the development of AI‑defender agents; continuous scanning and adaptive defenses are necessary. Bridging the digital divide requires investment in basic infrastructure (connectivity, energy), multilingual AI, local use‑case development, and systematic measurement of AI diffusion. India is transitioning from a back‑office to a front‑office role in AI, focusing on grassroots impact and local innovation rather than merely cost‑center services.
Resolutions and action items
Expand and localize the ISO/ISO‑42,001 standard and related drafts (e.g., NIST zero draft) to incorporate cultural and linguistic variations. Google to extend the Indiq GenBench benchmark to additional Indic and low‑resource languages and to contribute to the multilingual jailbreak benchmark with ML Commons. Microsoft to scale the Elevate program to 20 million Indians by 2030, including the new Elevate for Educators initiative and continued cloud/AI access for schools and vocational institutes. L&T Technology Services to continue internal upskilling through billable‑time training, curriculum alignment with industry needs, and incentive structures (patent and publication recognition). Develop AI‑defender agents and self‑adapting security stacks (as described by Sachin Kakkar) to counter AI‑driven attacks. Implement continuous auditing mechanisms for AI systems rather than one‑time certifications, as advocated by Sachin Kakkar. Establish voluntary data‑exchange frameworks and standardized data licensing (e.g., Creative‑Commons‑style for data) to reduce friction in cross‑border collaborations.
Unresolved issues
How to ensure rapid upskilling keeps pace with the exponential speed of AI advances, especially in developing economies. The precise balance between global regulatory frameworks and local adaptation without creating compliance burdens for startups. Effective mechanisms for last‑mile connectivity and digital empowerment in rural areas beyond high‑level investment commitments. Quantitative metrics for measuring AI diffusion and the impact of public‑private interventions over time. Long‑term economic displacement effects of AI automation and the extent to which AI can replace versus augment human security analysts.
Suggested compromises
Adopt a “creative tension” approach: start with global standards and regulations, then adapt them to local constraints (e.g., bandwidth, linguistic diversity). Use a mixed “carrot‑and‑stick” strategy for workforce upskilling—combine incentives (patents, recognition, budgets) with clear productivity expectations. Balance regulation with innovation by prioritizing evaluation‑first technical frameworks before imposing mandatory rules.
Thought Provoking Comments
There is a tension. On the one hand we want standards to apply across borders so companies can have responsible technology flow, but on the other hand we need to customize them for different cultures, languages, and norms.
Highlights the fundamental dilemma of creating universal AI standards while respecting cultural diversity, pushing the conversation beyond technical specifications to sociopolitical considerations.
Shifted the discussion toward the need for flexible, locally‑adaptable standards and prompted Sachin and others to talk about localization of regulations and tools, deepening the debate on how global frameworks can be made inclusive.
Speaker: Lee Tiedrich
Copy‑pasting regulations from international markets to local markets may not work. We need continuous scanning and auditing to avoid temporal drift as AI evolves.
Introduces the concept that static, one‑time compliance checks are insufficient for rapidly evolving AI systems, adding a dynamic, lifecycle‑focused perspective to governance.
Led to a follow‑up on the trade‑off between global standards and local adaptation, and set up later remarks about AI agents versus AI defenders, expanding the conversation to ongoing security monitoring.
Speaker: Sachin Kakkar
We track how much personal time employees spend on technology development beyond billable hours; that rose from 19 % to 52 %, and our patents per year jumped from 50 to 200. We reward patents, papers, and talks as personal achievements.
Provides a concrete, innovative model for aligning employee incentives with AI upskilling and innovation, showing how a company can turn upskilling into a productivity driver rather than a cost.
Introduced a new dimension to the skills‑gap discussion, prompting other panelists (e.g., Amanda) to compare corporate‑wide programs with individual incentive structures, and highlighted practical ways to embed AI learning into daily work.
Speaker: Amit Chadha
Microsoft Elevate aims to upskill 20 million Indians by 2030, with programs for teachers, vocational institutes, and higher‑education partners, and we measure diffusion to inform interventions.
Shows a large‑scale, data‑driven public‑private initiative that combines infrastructure, multilingual AI, and continuous measurement, illustrating a holistic strategy to bridge the digital divide.
Steered the conversation toward measurable impact and the importance of tracking adoption, influencing later audience questions about speed of deployment and prompting Lee to stress data‑standardization.
Speaker: Amanda Craig Deckard
If we didn’t have foreign workers in the U.S., we would fall behind the rest of the world. We are forced to use labor in other societies that appreciate STEM technology.
Points out the geopolitical dependency on talent from developing countries, framing the skills gap as not just a corporate issue but a national competitiveness concern.
Triggered a broader reflection on global talent flows, leading Brad to ask about carrot vs stick incentives and prompting audience concerns about rapid displacement and equity.
Speaker: Julian Waits
AI literacy is essential. We must teach students how to think, problem‑solve, and communicate so they can adapt as technology changes, not just hand them a fixed skill set.
Shifts the focus from technical upskilling to foundational education that equips people to navigate future AI disruptions, emphasizing long‑term resilience.
Answered the audience’s worry about exponential AI progress, reframed the skills‑gap debate toward education reform, and reinforced the call for public‑private partnerships in curriculum development.
Speaker: Lee Tiedrich
We need voluntary foundations and standardized data agreements (like Creative Commons for data) to enable easy, low‑friction data exchange across regions.
Identifies a practical bottleneck—data sharing—that underpins many of the earlier points about localization, standards, and AI benefits, proposing a concrete solution.
Closed the panel by linking earlier themes (standards, localization, trust) to a tangible action item, prompting Amanda to reference Microsoft’s measurement of diffusion and reinforcing the need for collaborative infrastructure.
Speaker: Lee Tiedrich
Overall Assessment

The discussion was driven forward by a series of pivotal remarks that moved the conversation from abstract concerns about AI concentration to concrete, actionable strategies. Lee Tiedrich’s articulation of the standards‑cultural tension set the stage for debates on localization and continuous governance, which Sachin expanded with the idea of ongoing audits. Amit Chadha’s insider view of incentive‑based upskilling and Amanda Craig’s large‑scale Elevate program offered contrasting but complementary models for closing the skills gap. Julian Waits highlighted the geopolitical reliance on talent, prompting a deeper look at equity and displacement, while Lee’s later emphasis on AI literacy reframed the problem as one of education rather than mere training. Together, these comments created turning points that broadened the scope, introduced new dimensions (data sharing, measurement, talent flows), and steered the panel toward a consensus that collaborative, adaptable, and measurable approaches are essential for democratizing AI benefits.

Follow-up Questions
What are the gaps between upscaling AI capabilities and the real economic displacement that need to be addressed in the transition process?
Understanding these gaps is crucial to design policies and reskilling programs that mitigate job loss and ensure a smooth economic shift as AI diffuses.
Speaker: Audience member (unidentified)
How can the digital divide be bridged to make AI access more equitable, especially in rural and underserved regions?
Identifying concrete strategies for last‑mile connectivity, affordable infrastructure, and inclusive AI literacy is essential to prevent exclusion and associated societal risks.
Speaker: Rita Soni (Audience)
How can continuous scanning and auditing of AI systems be implemented to avoid temporal drift, rather than relying on one‑time audits?
AI models evolve rapidly; ongoing monitoring is needed to ensure compliance with standards and maintain safety over time.
Speaker: Sachin Kakkar
What mechanisms can be created for data standardization and voluntary data‑sharing agreements to reduce friction in cross‑border AI collaboration?
Standardized data formats and clear licensing (e.g., Creative Commons‑like for data) would facilitate international cooperation and enable localized AI solutions.
Speaker: Lee Tiedrich
What approaches are needed to improve AI literacy and embed adaptable problem‑solving skills across the workforce and education systems?
Broad AI literacy, supported by public‑private partnerships, is key to enable individuals to keep pace with fast‑moving AI technologies.
Speaker: Lee Tiedrich
How can self‑defending, AI‑powered security agents be developed to create an ‘AI‑versus‑AI’ defense against emerging cyber threats?
Research into autonomous defensive agents could shift the defender’s dilemma, providing scalable protection against AI‑driven attacks on critical infrastructure.
Speaker: Sachin Kakkar
What are the implications of rapid, exponential AI advancement on information arbitrage, power polarization, and democratic stability?
Investigating how speed and unequal access to AI knowledge may exacerbate societal divides is vital for policy and governance frameworks.
Speaker: Audience member (unidentified)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.