Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit


Session at a glance: summary, key points, and speakers overview

Summary

The panel, moderated by Amanraj Khanna, examined how India’s recent policy announcements and large-scale investments are shaping an “AI stack” that links government vision with enterprise needs [7-14][15-18]. Khanna highlighted $20 billion pledged for India by Microsoft, $15 billion from Google and partnerships such as Anthropic-Infosys-Tata as evidence of growing momentum, but stressed that translating announcements into deployment at scale remains the key challenge [11-16][17-21].


CDAC’s executive director Vivek Kanneja explained that the National Supercomputing Mission has deployed about 48 petaflops of PARAM machines, to be expanded to roughly 100 petaflops across 60 sites by year-end, primarily serving researchers and some MSMEs via the PARAM Utkarsh hub in Bangalore [45-53][54-60]. These systems support workloads such as drug discovery, weather prediction, oil exploration and computational fluid dynamics, and CDAC also provides hands-on assistance to government agencies and start-ups [57-62].


Intel’s Nitin Bajaj described how Indian enterprises are still stuck in pilot phases because they must decide between on-prem, cloud, or edge deployments, evaluate ROI, and choose appropriate models, leading to a “no single formula” situation [70-78][79-84]. He noted that use-case selection (e.g., smart manufacturing, retail analytics, document search) is often driven by cost-performance trade-offs, and Intel promotes “frugal AI” solutions that can run sizable models on CPUs to reduce reliance on GPUs [85-88][158-166]. Nitin added that data-sovereignty requirements vary by sector (critical for banking and healthcare but less decisive for retail), so enterprises balance regulatory mandates against performance and cost considerations [147-156], and he called for benchmarks such as energy-per-token to guide model optimisation and ensure greener AI at scale [200-203].


When asked about AI sovereignty, Kanneja argued that full independence from global silicon is unrealistic; instead India should secure critical choke points above the chip layer, using foreign GPUs while retaining control over models, software and applications, and he mentioned a planned RISC-V-based GPGPU for 2029-30 [115-124][125-138]. Both speakers agreed that talent gaps hinder large-scale deployment: CDAC sees bright graduates lacking practical MLOps experience, while Bajaj points to India’s youthful demographic as a potential accelerator once training catches up [173-181][184-185]. Energy efficiency was raised as a concern; CDAC employs liquid-cooling and power-aware design to achieve PUE around 1.2, whereas Intel reports data-center PUE of 1.06 and emphasizes efficient processor architectures for edge and cloud workloads [188-199][207-212]. Looking ahead, Kanneja envisions success as AI being embedded in everyday workflows, simplifying life, while Bajaj envisions mass-scale AI adoption that elevates even small vendors and improves public intelligence through widespread data use [223][225-231]. The discussion concluded that coordinated progress in infrastructure, pragmatic sovereignty, talent development and sustainable energy will determine whether India can move from ambitious pilots to pervasive, cost-effective AI deployments [21][95-102][188-205].


Key points


Major discussion points


India’s AI ambition and the need to move from policy announcements to real-world scale.


The moderator frames the session around “massive policy announcements” and huge private-sector investments, then asks what it takes to “translate that vision into adoption and scale” [7-14][20-22].


CDAC’s supercomputing stack (PARAM) – current capacity, user base and workload types.


CDAC operates about 48 petaflops of PARAM machines, soon to reach ≈ 100 petaflops with 60 installations, serving ~15,000 researchers, MSMEs and start-ups for applications such as drug discovery, weather prediction, CFD, etc. [45-53][54-61][62-63].


Enterprise hurdles in scaling AI from pilots to production.


Nitin highlights the “speed-ROI” dilemma, the choices among on-prem, cloud and edge deployments, data-cleanliness issues, and rapidly evolving models; both speakers note that successful pilots often stall because of data-quality and MLOps gaps [70-84][95-102].


AI sovereignty versus reliance on global technology.


Vivek Kanneja explains that full end-to-end independence (silicon to application) is not yet feasible; a pragmatic model uses imported GPUs while keeping the stack (models, orchestration, applications) under Indian control, and mentions a home-grown RISC-V GPGPU planned for 2029-30 [115-138].


Talent and sustainability as cross-cutting constraints.


Both panelists stress a shortage of engineers who can move from theory to production-grade MLOps (curriculum gaps, need for capstone projects) [173-182]; they also discuss energy-efficiency measures: liquid cooling, low-PUE designs (≈ 1.2 for CDAC, 1.06 for Intel data centres) and the importance of power-aware model design [188-205][207-214].


Overall purpose / goal of the discussion


The panel was convened to provide a candid, dual-track examination of India’s AI ecosystem: one track examining the government-led R&D and supercomputing infrastructure, the other examining the practical needs and obstacles of large Indian enterprises. The aim was to identify where these tracks intersect, where gaps remain, and what concrete steps are needed for India’s AI vision to become a scalable reality.


Overall tone and its evolution


Opening: Energetic and optimistic, celebrating recent policy wins and investment announcements. [5-11]


Mid-session: Shifts to a pragmatic, problem-solving tone as panelists detail real-world constraints (capacity limits, pilot-to-production bottlenecks, sovereignty trade-offs). [45-84][115-138]


Later: Becomes reflective and solution-oriented, acknowledging talent and energy challenges while offering concrete mitigation strategies (curriculum reform, frugal AI, efficient cooling). [173-182][188-214]


Closing: Concludes on a hopeful, forward-looking note, summarising a vision of widespread AI deployment across workflows and society. [223-230]


The conversation thus moves from high-level enthusiasm to grounded analysis and ends with an aspirational yet realistic outlook.


Speakers

Amanraj Khanna


Role/Title: Partner and Managing Director for India at the Asia Group; Moderator of the panel.


Area of Expertise: Technology policy, AI ecosystem strategy, bridging government vision with enterprise needs.


Source: [S5], [S7]


Vivek Kanneja


Role/Title: Executive Director, Center for Development of Advanced Computing (CDAC).


Area of Expertise: National supercomputing infrastructure, high-performance computing (HPC), AI compute platforms, research and training in advanced computing.


Source: [S1], [S2]


Nitin Bajaj


Role/Title: Director of Sales for Conglomerate Accounts, Intel India.


Area of Expertise: Enterprise AI adoption, sales and technology leadership for large Indian enterprises, solution architecture spanning cloud, edge, and on-prem AI workloads.


Source: [S3], [S4]


Additional speakers:


Sangeeta Reddy


Role/Title: Joint Managing Director, Apollo Hospitals.


Area of Expertise: Healthcare leadership and digital transformation (invited to give remarks, not a panel participant).


Source: (mentioned in transcript)


Full session report: comprehensive analysis and detailed insights

The session opened with moderator Amanraj Khanna framing India’s AI agenda as a “stack” that must link government vision with the practical needs of enterprises. He noted that “the energy here is still so palpable, even after five days” [1-4] and highlighted “Pax Silica announced this morning” as a fresh policy signal [5-7]. Khanna then pointed to a series of high-profile commitments – Microsoft’s $20 billion pledge, Google’s $15 billion investment, and the Anthropic-Infosys-Tata partnership – as evidence of a “truly fascinating moment” for the country’s AI ambitions [10-14].


Khanna introduced the two panelists. To his immediate left was Vivek Kanneja, executive director of the Centre for Development of Advanced Computing (CDAC), which runs the PARAM supercomputing series for government missions and national research [27-31]. To his far left was Nitin Bajaj, director of sales for conglomerate accounts at Intel India, overseeing the company’s engagement with the nation’s largest enterprises on digital transformation and AI adoption [32-38].


CDAC’s role and compute foundation


Kanneja explained that CDAC, a scientific society under the Ministry of Electronics and Information Technology, is tasked with building supercomputing capacity through the National Supercomputing Mission. Since the late 1980s CDAC has evolved from the PARAM 8000 to a family of machines that together deliver roughly 48 PFLOPS, a figure slated to rise to about 100 PFLOPS across 60 installations by year-end [45-53]. These resources are accessed via the National Knowledge Network (NKN) and are currently used by around 15,000 researchers [55-56]. Typical workloads include drug discovery, bio-informatics, protein folding, molecular modelling, weather forecasting, oil exploration, finite-element analysis and computational fluid dynamics [57-61].


Enterprise adoption challenges


Bajaj described how Indian corporations are still largely stuck in pilot projects. Organisations first grapple with identifying high-impact use-cases – smart manufacturing, smart retail and document search – and then face a “speed-ROI” dilemma that forces choices between on-prem, cloud or edge deployments, between open APIs and bespoke models, and over how to achieve cost-effective production-scale roll-out [70-84]. He argued that there is “no single formula” because the AI ecosystem – from silicon to operating systems – evolves at “lightning speed”, leaving enterprises uncertain about the optimal deployment model [81-84]. The lack of a clear ROI calculation, combined with rapidly changing model offerings, often leaves pilots in limbo [85-88].


Both panelists agreed that the transition from proof-of-concept (POC) to production is a major choke point. Kanneja noted that POCs succeed on curated data sets but falter when confronted with real-world data that is noisy, incomplete or unlabelled, and when organisations lack mature MLOps capabilities [95-102]. Bajaj echoed this, adding that the “speed-ROI” dilemma and the difficulty of choosing an appropriate deployment architecture further impede scaling [70-84]. This convergence underscores that data quality, operational expertise and cost-benefit analysis are the primary barriers to large-scale AI deployment [95-102][70-84].


AI sovereignty


When asked about AI sovereignty, Kanneja gave a nuanced answer. He argued that full independence from the global silicon supply chain is not feasible today because India does not possess the IP or a fab that can produce sub-3 nm chips [115-124]. Instead, a pragmatic model is to import GPUs (from NVIDIA, Intel, AMD, etc.) while retaining control over the GPU farm, model development, orchestration and applications, thereby securing the critical “choke points” of the stack [127-133]. CDAC is also pursuing a home-grown RISC-V-based GPGPU, expected around 2029-30 [136-138].


Bajaj added that data-sovereignty requirements differ by sector. For banking and healthcare localisation is essential, whereas manufacturing and retail often prefer cloud-based solutions for speed and performance [147-152]. In operational-technology settings firms may keep data on-premise or at the edge for security, but cost considerations dominate when scaling from a cloud-trained model to on-prem deployment [153-156]. He highlighted Intel’s “frugal AI” strategy, which leverages CPUs that integrate GPU, NPU and CPU cores to run 7-8 billion-parameter models at the edge, and Xeon processors that can handle up to 80 billion-parameter models in data centres, questioning whether a dedicated GPU is always necessary [158-166].
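A back-of-envelope memory estimate shows why these parameter counts map to edge-class versus server-class hardware. The sketch below is illustrative only; the 20% overhead factor for KV-cache and activations is an assumption, not an Intel figure:

```python
def model_memory_gb(params_billions: float, bytes_per_param: float,
                    overhead: float = 1.2) -> float:
    """Rough inference-time memory for model weights in GB, with an
    assumed ~20% overhead for KV-cache and activations."""
    # params_billions * 1e9 params * bytes/param / 1e9 bytes-per-GB
    return params_billions * bytes_per_param * overhead

# An 8B-parameter model quantised to int8 (~1 byte/param) fits in
# edge-class RAM; an 80B model at fp16 needs server-class memory.
print(round(model_memory_gb(8, 1), 1))   # ~9.6 GB
print(round(model_memory_gb(80, 2), 1))  # ~192 GB
```

The same arithmetic explains the appeal of quantisation: halving bytes-per-param roughly halves the footprint, at some cost in accuracy.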


Talent gap


Kanneja lamented that while Indian engineering graduates possess strong theoretical foundations, curricula rarely cover practical MLOps, data-cleaning, real-time constraints or security considerations; he called for capstone projects that expose students to beta-scale data [173-182]. Bajaj, by contrast, pointed to India’s youthful demographic – an average age of 13-25 – as a natural accelerator that will close the skill gap within a few years, noting his own learning from younger colleagues [184-185].


Energy and sustainability


Kanneja described CDAC’s power-aware design practices – power islands, clock-gating and a shift from water-cooling to liquid-cooling (approximately 70% liquid, 30% water) – achieving a Power Usage Effectiveness (PUE) of roughly 1.2, compared with 1.4 for conventional systems [190-194]. He advocated benchmarking “energy-per-token” for training and inference and exploring model compression to reduce power draw [200-204]. Bajaj complemented this by noting Intel’s own data-centre PUE of 1.06, the use of RibbonFET and “PowerVia” technology for a 15% efficiency gain, and the importance of judicious model selection to curb power demand [207-210][215-217].
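Both efficiency metrics mentioned here are easy to state precisely: PUE is total facility power divided by IT equipment power (1.0 is ideal), and energy-per-token divides measured energy draw by tokens produced. A minimal sketch, with made-up power figures purely for illustration:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

def energy_per_token(avg_power_w: float, seconds: float, tokens: int) -> float:
    """Joules consumed per generated token (lower is greener)."""
    return avg_power_w * seconds / tokens

# Hypothetical loads: a 1.2-PUE cluster vs. a 1.06-PUE data centre
print(pue(1200, 1000))   # 1.2
print(pue(1060, 1000))   # 1.06
# 700 W sustained for 10 s while producing 2,000 tokens
print(energy_per_token(700, 10, 2000))  # 3.5 J/token
```

At a fixed IT load, moving from PUE 1.2 to 1.06 cuts facility overhead from 200 kW to 60 kW in this example, which is why both speakers treat cooling design as a first-order concern.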


Points of agreement and divergence


Both speakers agreed that (i) POC-to-production bottlenecks stem from data quality, MLOps maturity and ROI uncertainty; (ii) energy efficiency is critical, with both CDAC and Intel demonstrating low-PUE designs; (iii) success will be measured by mass-scale AI integration that improves everyday life and empowers even informal-sector actors; and (iv) a talent gap exists but can be mitigated through curriculum reform and the country’s demographic dividend [95-102][70-84][190-194][207-210][173-185].


Disagreements emerged around the root cause of the talent gap (curriculum reform versus demographic momentum) and the primary obstacle to scaling AI (data-quality/MLOps versus deployment-model/ROI decisions) [173-185][70-84]. Their visions of success also differed: Kanneja summed it up as “AI being actually deployed in a lot of workflows and making life much simpler” [223], while Bajaj envisaged a future where India becomes the world’s leading consumer of data, enabling even a “Sabziwala” (vegetable vendor) to up-skill through Indic language models and mass-scale AI deployment [225-231].


Open questions raised


The panel highlighted several open questions, including how to develop quantitative ROI models, how enterprises should choose between on-prem, cloud or edge deployments, how to standardise energy-efficiency benchmarks such as “energy-per-token”, how to achieve fuller data-sovereignty while still relying on imported silicon, and what scalable pathways can bridge the talent gap beyond curriculum changes [70-84][95-102][115-124][147-156][173-185][190-194][207-210].


In closing, both panelists expressed optimism. Kanneja envisaged a near-future where AI is woven into everyday workflows, simplifying life for citizens [223]; Bajaj projected that within two to three years India will lead in data usage, with AI-enabled tools uplifting even small vendors and supporting a broad ecosystem of Indic models [225-231]. The discussion underscored that realising India’s AI ambitions will require coordinated progress across infrastructure, pragmatic sovereignty, talent development and sustainable energy, turning policy announcements into tangible, cost-effective AI deployments at scale.


Session transcript: complete transcript of the session
Amanraj Khanna

India’s AI stack, bridging government vision with enterprise needs. My name is Amanraj Khanna. I’m a partner and managing director for India at the Asia Group. Also my privilege to be a moderator here today. I have to say the energy here is still so palpable, even after five days of this. So it’s absolutely brilliant to be here with you all. We’ve had a truly fascinating moment in India’s AI ambitions. Some massive policy announcements. We just had Pax Silica announced this morning. Very exciting indeed. Significant infrastructure investments. Brad Smith, if you heard him yesterday, Microsoft announced $50 billion in the global south alone. After $20 billion, that has been announced for India. Google, as you know, $15 billion. And growing enterprise adoption.

Anthropic announced its partnership with Infosys, Tata, OpenAI. I’m sure you’re all watching this. But announcement is one thing. Deployment and then achieving scale quite another. So that’s why this panel matters. Today’s conversation brings two critical perspectives to this fundamental question. What does it take to translate that vision into adoption and scale? One of my two distinguished speakers brings unparalleled insight into infrastructure being built through national R&D institutions. The other sees the reality of what India’s largest enterprises actually need when they deploy AI. So we are here to have an honest conversation about these two tracks. Where do they connect and perhaps where they don’t. So with that let me introduce my distinguished panelists. First I have to my immediate left Mr.

Vivek Kanneja. Vivek is the executive director of the Center for Development of Advanced Computing, CDAC as it’s known. And CDAC has built the PARAM supercomputing series providing AI compute infrastructure for government departments and national missions. It conducts cutting-edge research into high-performance computing and cybersecurity, and also trains thousands of engineers annually in advanced computing and AI. Vivek, of course, has held multiple senior leadership positions within CDAC and has guided national initiatives in these critical areas. Welcome, Vivek. I also have to my far left, Nitin Bajaj. Nitin is Director of Sales for Conglomerate Accounts at Intel India. He leads Intel’s engagement with India’s largest enterprises on their digital transformation and AI adoption journeys. He has over 28 years of global experience in sales and technology leadership and orchestrates a broad partner ecosystem.

These include system integrators, ISVs, cloud providers to deliver Intel-based solutions spanning cloud, AI, HPC, 5G, edge, and end-user computing. So in summary, he sees firsthand what drives enterprise infrastructure decisions and what actually prevents companies from moving AI from pilot to true scale. So with those introductions, let’s get right into it. Vivek, why don’t I start with you? So here’s my first question, Vivek. CDAC has built the PARAM supercomputing series and AI compute infrastructure. What compute capabilities does CDAC actually provide today? Who uses them? For what workloads? And what are the key constraints that you see operating within?

Vivek Kanneja

Okay, thanks. So as you know, CDAC is a scientific society under the Ministry of Electronics and IT. And one of the mandates that we have is to build supercomputing capacity in the country. This is the mandate which is given to us under the National Supercomputing Mission, where we have developed a series of supercomputers under the brand name PARAM. We started in the late 80s, starting with the PARAM 8000. And now we have the PARAM series of supercomputers which are installed with our own software. About 48 petaflops of overall supercomputing performance is installed in the country, connected over the National Knowledge Network, or the NKN as it is called.

This capacity is going to be augmented to about 100 petaflops by the end of this year with 60 installations. Most of these installations are today being used by researchers. About 15,000 researchers fire jobs across these machines on the NKN. A lot of it is also being used by MSMEs. For example, we have opened PARAM Utkarsh, which is housed in our Bangalore centre, for use by start-ups and MSMEs. The kind of applications that run here include drug discovery, bioinformatics, we have protein folding, we have molecular modeling, you have weather prediction. So I mean almost all of these number-crunching problems are being run: oil exploration, finite element modeling, computational fluid dynamics problems. So all such problems are being run across these clusters by researchers.

We also have a lot of expertise in various domains which we have developed in-house over a period of years, where we are hand-holding a lot of these agencies; a lot of government agencies are working with us on this.

Amanraj Khanna

Thanks so much, Vivek. Lots of threads to pull on there, but for a moment, let me go to Nitin. Nitin, you work with some of India’s largest enterprises, some of our national champions. So when these enterprises commit to AI, which you must have seen increasingly so, what are some of the actual barriers that prevent them to move from pilot projects to some of those production-scale deployments that everyone envisions, especially at events like this?

Nitin Bajaj

Thank you. First of all, thank you for inviting me. It’s a privilege to be here and talking in front of such an esteemed crowd. So basically, I think I’ll kind of break this into two, three pieces. One, be it Indian enterprises or be it global enterprises, I think everybody is grappling with the same sort of problem statement. And everybody is trying to find those use cases which are very, very pertinent for their own enterprises. And some certain enterprises are into manufacturing domain, some are there in the retail side of it. And so the typical use cases that they have could be around smart manufacturing, smart retail, or specific generic use cases where they want to do a lot of document search.

They have to have fine-tuned search on the policies, the TNCs and things of that sort. So essentially, the way you see it, I see it in twofold. One, they are looking at the speed, and everybody is trying to figure out the best ROI. So I think the biggest gap today is what to use, whether to use it on-prem or whether to go on cloud, use open APIs which are available to them. Then once those use cases are ready, as you said, from pilot to production, what is the final cost of that deployment? And then the third angle that comes in is whether to kind of centralize all of this or to take it to the edge.

So there is no single answer. And when you think of all the ecosystem providers that are there today, be it from silicon to ISVs to system integrators, everybody has a pocket of expertise in their own sense. But today there is no single formula. And the entire AI journey is changing so rapidly, models are being dropped at the speed of light, and the whole ecosystem, from silicon to OS to everything else, is changing so fast that even the enterprises are trying to figure out what is the best deployment model for them, what is the best ROI that they can get out of it. But in the midst of all of this, in pockets, a lot of these enterprises are trying to see what are the specific use cases that they can bring to fore that can bring some incremental benefit to whatever operations they are running in their organization.

Now I can give multiple examples of it. For example, in a manufacturing domain, there could be surveillance kind of use cases, multi-modal use cases; there could be use cases around how to look at inventory and then how to look at a complete digital twin, dark factory kind of a scenario. In case of retail, it is all about, say, preventing the thefts, the pilferage that happens, or doing customer analytics. And then, as I said, a lot of document search kind of examples are going on. But in most of those cases, the whole decision-making is between edge versus on-prem, cloud versus sovereign data centers, what kind of models they should use, and how to find out the right ROI, which is where we feel frugal AI is what we kind of propose to the industry.

And I’ll talk more about that maybe later. But what is the best deployment model that can really help an enterprise scale at a level, at a cost point, which is really making sense to them?

Amanraj Khanna

Let me ask you a very quick follow -on question on that. Do you see that Indian enterprises are increasingly sophisticated in making these choices? And, of course, Intel works globally, right? So how does it compare in terms of its maturity, its sophistication, compared to the other markets where Intel also operates?

Nitin Bajaj

Clearly, in terms of use cases, I think we have the edge. But again, the veracity of data, it’s a big problem. Second, I would say that these enterprises, when you think of it, a year back almost everybody wanted to buy GPUs, and a lot of these enterprises have bought systems which are powerful enough to run all kinds of use cases, but still they are in that pilot phase because, again, of that ROI factor. So things are maturing. Of course, enterprises are becoming smarter. From LLMs we are now looking at SLMs. So they are trying to kind of figure out the right silicon where they have to land their workloads on. So as things are emerging, I think things are getting better, but yeah, it’s still some time before you see those live deployments coming out.

Vivek Kanneja

For the benefit of the larger audience, sorry for interjecting, just to add to what he just said: we are seeing a lot of places where people are not being able to come out of the POCs. I think one of the major reasons, at least personally I have seen, is that people are very happy with the POCs. They can train them on curated data sets. But once it actually goes and hits real-life situations where the data needs to be cleaned up, it’s not clean, you have no proper experience in actual deployments of the MLOps, you have done it in a canned manner, then suddenly the reality hits that, no, it’s not that simple.

And as you talked about the ROI, then you have to make those choices: whether I should have an on-prem thing, do I really need a GPU for my problem? Or can I work across multiple VMs on a simple, simple IT infrastructure? So those are the choices. Hopefully in the coming years, intelligent choices will be made. We will start to see more MLOps engineers coming out and deploying these things at scale, because POCs is fine, but the real revenue will come only once you actually deploy it at scale.

Amanraj Khanna

Understood. Thank you so much for those perspectives. Unlike yourselves, you know, you’re both technologists. I’m a little bit in the policy space. I’m in the fringes of this. I work on tech policy, and I haven’t had a single conversation here with, you know, a foreign investor which hasn’t talked about sovereignty or dependency. So I’m going to bring my next question. I’ll bring you over to my area because I’d love to pick both your brains on this. So Vivek, let me talk to you about this first, right? So India talks about sovereignty, but CDAC still relies on global technology, right, whether that’s chips, systems, you know, software stacks. So realistically, what can India build domestically versus what will we always need to source globally, right?

and where should we focus our capability? So question to you as someone who’s really on the cutting edge.

Vivek Kanneja

Okay, so let me answer that both technically and politically correct way. See, when you talk of sovereignty, let’s see what does it really mean. Do you want to be completely independent in the entire vertical, right from silicon up to the application? Is that really possible, right? Let’s say I need to design a GPU. It’s a good aspirational goal. But do I have the wherewithal today to do the entire thing in -house? Probably not. I don’t have the IPs. We can start, definitely. We don’t even have today a fab that can give me a 3 nanometer or below production capabilities and package capabilities. So I think a more pragmatic approach is to have maybe the silicon coming from outside and everything.

Everything above that should be under my control. So you should be able to control all your critical choke points. So that’s the model that India AI has taken. For example, they have created a farm of GPUs which is available, under their control, freely or at a reasonable cost to the developers. The models which are being built on top of that are under your control. How those models are going to be orchestrated is under your control. The application that will use those models is under your control. So for me, that is the sovereignty where you are getting the maximum ROI. Should I really be competing against an H100 or a B200 from NVIDIA? Probably not in the short term.

But yes, as an aspirational goal, just to let you know, CDAC is designing its own GPGPU based on RISC-V. We will probably have something by the end of 2029-30. But then, till that time, we really need to have a lot of this entire stack under sovereign control, maybe using chips from outside, whether it is NVIDIA, whether it is Sapphire Rapids or Granite Rapids from Intel, or from AMD. But everything above that, all my critical choke points, should be under my control.

Amanraj Khanna

So I have to compliment you. That was a very candid and pragmatic answer. So compliments to you. Nitin, can I reframe that question a little bit for you? So, you know, there’s, as you know, in the same vein, there’s a significant policy focus on data sovereignty and data localization as well, right? So when your enterprise customers are making AI infrastructure decisions, how much do these factor into those choices? And how do they compare with cost and performance considerations as well? So has this calculus shifted with some of the policy developments that we’ve seen over the past couple of years in AI?

Nitin Bajaj

Well, I would say not really. So it all depends on the industry and the market that the enterprise is in. For a banking industry, for a healthcare industry, data sovereignty is very, very important. For a manufacturing industry or for any other, say, retail or any other industry, they are trying to build use cases on the cloud because that’s where they feel that they can build those use cases very, very quickly. They can simply call the APIs available there, and then they see that the performance is much better and they get better accuracy. So it’s a mix of both. Now, then again, even in a manufacturing environment, as I was calling out, in an OT environment, a lot of these manufacturing firms would want their data to reside within their perimeter because it is so, so close to them.

Then again, when it is coming to the deployment side of it, they’re again looking at an edge deployment which stays within the perimeter. So it’s a mix of both. Now, finally, again to the point that I was making in the beginning: anything that has to scale, the price becomes a key driver. So it is about, okay, I have a model that I have kind of fine-tuned on the cloud; now when I have to take it to deployment, can I use something locally which is available today with me without making a lot of investment, and then can I scale it? Which is where, when Intel is talking about it, we basically kind of focus on frugal AI. Today the Intel Core and Core Ultra CPUs have a GPU, NPU and a CPU all combined in a single processor, which allows you enough capability to run maybe a 7 or 8 billion parameter model. So the typical requirements that you will have on the edge are very well suited on this CPU itself. And when it comes to the data center side, the Xeon 6 processors today are able to run a 20 billion parameter model very easily.

They can go up to 80 billion parameters, depending on the specific use case. So “do I need a GPU in every instance?” is the first question we are trying to ask. Everybody already has a CPU in their environment, so can they reutilize that, test it out, and look at performance levels? One example I discuss with my customers: if you’re looking at, say, a prompt-based engine for document search, a human eye can read about 10 tokens in a second if somebody is a fast reader. Now if a processor can give you 15 to 20 tokens, is that good enough performance for you, or do you want 200 tokens in that same second?

Maybe 15 or 20 is good enough. That’s where cost versus performance comes in, and where one has to be very calculated about the end use they are looking at and what would best suffice for that usage. So again, it’s a mix of this localized data versus what can go onto the cloud. And once you look at the scale of it, you have to look at the cost, because when you’re scaling on the cloud the cost may be very, very different from what you can get on-prem. I’m not a proponent of on-prem versus cloud; I think both bring their advantages. What I’m trying to say is that the customer has to really look at what their end use is and at what cost they want it.
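Nitin’s sufficiency argument is simple arithmetic, and can be sketched as a back-of-the-envelope check. The numbers and the function below are illustrative (a hypothetical helper, not an Intel benchmark); the headroom factor is an assumption.

```python
# Back-of-the-envelope check: is a processor's output rate "good enough"
# relative to how fast a human can consume it? Illustrative numbers only.

def is_throughput_sufficient(device_tokens_per_sec: float,
                             human_read_tokens_per_sec: float = 10.0,
                             headroom: float = 1.5) -> bool:
    """True if the device comfortably outpaces a fast human reader."""
    return device_tokens_per_sec >= human_read_tokens_per_sec * headroom

# A CPU serving 15-20 tokens/sec already matches or exceeds a fast
# reader's ~10 tokens/sec, so paying for 200 tokens/sec may buy nothing
# the user can actually perceive.
print(is_throughput_sufficient(15))  # True
print(is_throughput_sufficient(8))   # False
```

The point of the check is that cost-performance decisions hinge on the end use: once the device clears the human-perception bar, extra throughput is wasted spend.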

Amanraj Khanna

Understood, so we need to diversify our approaches and have a product-to-mission fit, which is absolutely critical. Think of it this way: in the past, everybody used to think only mainframes could solve the problem; then applications moved off mainframes onto CPU-level machines, and today we are looking at microservices. Everything has evolved over time, and the same thing is happening on the AI side today. Understood. So, one quick question that’s on everyone’s mind, to both of you. From each of your respective perspectives, whether that’s government R&D infrastructure or enterprise deployment, is the talent and capability gap still a critical choke point?

So Vivek, why don’t I start with you on this?

Vivek Kanneja

It is. It’s unfortunate, but yes, it is. We do find a set of very bright engineers coming out, but most of them are trained with a good theoretical understanding of what machine learning is. They are good at mathematics and the basics, but when it comes to actual deployments in the field, that’s where we are lacking. Maybe we need to have a serious look at our college curricula: how do I train large models, how do I deploy large models using MLOps? Because today, as I said, most of these kids are working on curated datasets, on standard test cases and validation cases.

But when it comes to real life, things are not that rosy. You have data which is missing, data which is skewed, data which needs to be cleaned; you have real-time constraints and security considerations. Those are not part of the curriculum. So I think some capstone projects where students can handle peta-scale data need to be put in place. Theory is fine, but there’s still a lot to learn on the practical side.

Nitin Bajaj

I’ll give a different perspective. The way I look at India versus other countries: in other countries it’s an aging population; for us it is a booming population, largely aged between 13 and 25 or so, which has grown up exposed to AI. So today we may see some gap in AI capabilities, but two or four years down the line I think that will be bridged very, very quickly. We have that demographic benefit here. In the short run, of course, as an individual I am also learning from the kids today how AI can be deployed. So gaps will be there, but I think this is a learning curve for everybody.

Amanraj Khanna

Thanks, Nitin. I want to get to just a couple more questions, and we’ll try to be very quick. One I’ve been wanting to ask, and one I get asked often, is the energy and sustainability question. Supercomputing uses huge amounts of energy and can have societal impacts, whether it’s CDAC’s supercomputing or the large data centers that Intel works with. So how do we think about energy and sustainability implications, especially in the Indian context? Vivek, perhaps start with you.

Vivek Kanneja

From my perspective, I look at it as something to be addressed at two levels. One is something we have already been doing. Coming from a VLSI background, I can tell you that there are standard techniques used today in all ASIC designs which are power-aware designs: you have multiple power islands, you have clock-tree gating, you switch off cores which are not being used. That’s the design perspective. Then, when it comes to platform design, there are smart choices being made. For example, in CDAC’s solutions today we are using liquid cooling as well as water cooling, in a ratio of almost 70 to 30.

And we are slowly moving the entire thing to pure liquid cooling. There are other advanced techniques which use only air cooling. Ultimately, your PUE is what will actually determine efficiency. Typically we see a PUE of around 1.2 or so, compared with a conventional water-cooled setup at about 1.4 or 1.5. There are definitely green norms being proposed. The question to be asked here, and we need to do this kind of benchmarking across all the models, is: what is the energy I spend per token, for training or for inferencing? That should be a critical benchmark. We need to look seriously at how I can optimize my models to be more power-aware.

Can I have compressed models which take less energy? So yes, energy is one of the critical factors, and hyperscalers especially will need huge amounts of power. There is this joke we keep telling, that *** But along with the hyperscaler, you also need a small power plant designed together with it.
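The two efficiency metrics Vivek invokes, PUE and energy per token, have simple definitions that can be sketched directly. The functions and figures below are illustrative (hypothetical helpers, not CDAC measurements); the PUE reference points mirror the values cited in the discussion.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy.
    1.0 is the ideal; liquid-cooled sites cited here run ~1.2,
    conventional water-cooled setups ~1.4-1.5."""
    return total_facility_kwh / it_equipment_kwh

def energy_per_token(total_energy_joules: float, tokens: float) -> float:
    """The benchmark Vivek proposes: joules spent per token, for a
    training run or an inference workload."""
    return total_energy_joules / tokens

# Illustrative numbers only.
print(round(pue(1200.0, 1000.0), 2))            # 1.2
print(energy_per_token(5_000_000, 10_000_000))  # 0.5
```

Tracked per model, the energy-per-token figure would make compression and power-aware optimisation directly comparable across vendors, which is the benchmarking gap he is pointing at.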

Amanraj Khanna

Thank you. Nitin, to you.

Nitin Bajaj

I’ll make two or three points here. One, from a manufacturing point of view, Intel is using its latest process technologies, RibbonFET and PowerVia, which improve power efficiency by 15%. Second, we run our own data centers at a PUE of 1.06, which is among the most efficient data-center PUEs you could see; there’s a white paper on intel.com, and I would encourage those interested to look at it. Third, power is a problem, or rather the first and foremost ingredient in running those data centers. So one has to be very conscious of what kind of models you are running and where they are landing.

So if you’re more judicious in your model selection, then of course we can save power in some ways.

Amanraj Khanna

So, one final question; I realize we’re out of time. Let’s answer this in a quick sentence if you can. When we assess India’s progress over the next three to five years, what does success look like to each of you? Maybe a sentence.

Vivek Kanneja

Success for me would be AI actually deployed in a lot of workflows, making life much simpler and more enjoyable for us.

Amanraj Khanna

Thank you.

Nitin Bajaj

For me, it’s like this: we were once ranked around 150 in terms of data usage; today we are number one. Now it comes to increasing the general intelligence of people. Today we are consuming all that data for media and entertainment. If we can consume data to improve the general intelligence of the public, that will make a large-scale impact on society and on India at large. So in two or three years, if a sabziwala can figure out how to up-level their state, that would be the best way. And the Indic models and all the other use cases coming out should be able to support that.

And then there will be a mass -scale deployment of AI across the board.

Amanraj Khanna

Thanks so much, Nitin. I see that time’s up; I wish we could pick your brains further. This has been a truly fascinating conversation, and thank you for being so candid with all your responses. Please join me in thanking both Vivek and Nitin. I would now like to invite Sangeeta Reddy, Joint Managing Director, Apollo Hospitals, to give her remarks. Thank you.

Related Resources — Knowledge base sources related to the discussion topics (38)
Factual Notes — Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“Moderator Amanraj Khanna framed India’s AI agenda as a “stack” that must link government vision with the practical needs of enterprises.”

The knowledge base describes Khanna’s framing of India’s AI stack as bridging government vision with enterprise needs [S1] and [S2].

Confirmed (medium)

“He noted that the energy here is still so palpable, even after five days.”

Khanna’s remark about the palpable energy after five days is recorded in the transcript [S2] and [S6].

Additional Context (medium)

“He highlighted “Pax Silica announced this morning” as a fresh policy signal.”

Pax Silica is referenced in the summit as a policy concept that amplifies India’s strategic strength, though the source does not specify an announcement that morning [S81].

Confirmed (high)

“Vivek Kanneja is the executive director of the Centre for Development of Advanced Computing (CDAC).”

The session listing identifies Vivek Kanneja (spelled Kaneja) as Executive Director of CDAC (CDAT) [S3].

Confirmed (high)

“Nitin Bajaj is director of sales for conglomerate accounts at Intel India.”

The knowledge base lists Nitin Bajaj as Director of Sales and Marketing at Intel, confirming his senior sales role at Intel India [S3].

External Sources (84)
S1
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — India’s AI stack, bridging government vision with enterprise needs. My name is Amanraj Khanna. I’m a partner and managin…
S2
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — First I have to my immediate left Mr. Vivek Kanneja. Vivek is the executive director of the Center for Development of Ad…
S3
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — -Nitin Bajaj: Director, Sales and Marketing, Intel (mentioned for upcoming fireside chat session)
S4
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — – Dr. Vivek Khaneja- Nitin Bajaj Dr. Khaneja advocates for a uniform approach to sovereignty focusing on software contr…
S5
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — India’s AI stack, bridging government vision with enterprise needs. My name is Amanraj Khanna. I’m a partner and managin…
S6
https://dig.watch/event/india-ai-impact-summit-2026/fireside-chat-intel-tata-electronics-cdac-asia-group-_-india-ai-impact-summit — And growing enterprise adoption. Anthropic announced its partnership with Infosys, Tata, OpenAI. I’m sure you’re all wat…
S7
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — India’s AI stack, bridging government vision with enterprise needs. My name is Amanraj Khanna. I’m a partner and managin…
S8
Data governance — Other states have attempted more direct and protectionist approaches. Some have embedded data localisation measures in l…
S9
The Global Power Shift India’s Rise in AI & Semiconductors — Arguments:India must increase scale of ambition beyond domestic market to compete globally, requiring significant public…
S10
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — And you clearly outlined what it means to have a human central leadership, a open, shared, collaborative leadership. Thi…
S11
AI as critical infrastructure for continuity in public services — “Data is siloed, data is not ready for AI scale.”[71]. “So almost 80 % of those pilots don’t make it to production.”[98]…
S12
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — And of course, the technology itself needs to continue to evolve. And so on the data foundation, though, it’s not option…
S13
India’s AI Future Sovereign Infrastructure and Innovation at Scale — AI Adoption Challenges and Production Readiness: Brenno Mello presented concerning statistics that 95% of AI pilots neve…
S14
Global AI Policy Framework: International Cooperation and Historical Perspectives — -Sovereignty vs. Openness in AI Development: The concept of “open sovereignty” emerged as a key theme – the idea that co…
S15
From KW to GW Scaling the Infrastructure of the Global AI Economy — NVIDIA’s contribution to India’s AI ecosystem includes sharing reference designs for AI factories, open-sourcing control…
S16
WS #466 AI at a Crossroads Between Sovereignty and Sustainability — ## Introduction and Context Yu Ping Chan: Thank you so much to the organizers for having me here today. So I represent …
S17
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — Infrastructure limitations also constrain adoption. Ashutosh noted that compute resources and research talent shortages …
S18
ABOUT THE TASK FORCE — – Public funds can be insufficient or poorly used. – Financial returns can be lower than other investment options. – Mat…
S19
Advancing digital inclusion and human-rights:ROAM-X approach | IGF 2023 — Furthermore, there is a need for improved cross-cutting analysis within the framework, particularly with regard to gende…
S20
Leveraging AI4All_ Pathways to Inclusion — Despite significant progress, several challenges remain unresolved. The fundamental scaling problem persists across sect…
S21
Global AI Policy Framework: International Cooperation and Historical Perspectives — Werner identifies three critical barriers that prevent AI for good use cases from scaling globally. He emphasizes that d…
S22
AI as critical infrastructure for continuity in public services — “Data is siloed, data is not ready for AI scale.”[71]. “So almost 80 % of those pilots don’t make it to production.”[98]…
S23
Leveraging AI4All_ Pathways to Inclusion — Scaling Challenges and Investment Innovation The discussion revealed that many AI products remain stuck in pilot stage …
S24
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Data governance, security concerns, and potential token pricing shocks are major barriers preventing pilot projects from…
S25
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Brandon Mello introduced a sobering statistic: 95% of AI pilots never reach production deployment. The primary barriers …
S26
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — India’s AI stack, bridging government vision with enterprise needs. My name is Amanraj Khanna. I’m a partner and managin…
S27
Fireside Chat The Future of AI & STEM Education in India — The discussion revealed substantial practical initiatives already underway in Uttar Pradesh, demonstrating how policy ca…
S28
IGF 2024 Opening Ceremony — This comment provided a structure for subsequent speakers to address specific aspects of AI governance and inequality. I…
S29
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Fundamental infrastructure challenges—including limited computing power, inadequate connectivity, and capacity gaps—requ…
S30
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — Because of the skill development, India has a youth, 40 % youth in India. The question is skilling, skilling in India, e…
S31
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — Talent and Capability Gap – Demographic advantage will rapidly close skill gaps; learning curve expected to be short
S32
Science AI & Innovation_ India–Japan Collaboration Showcase — This comment crystallizes a concrete, measurable vision for AI’s social impact – transforming bureaucratic complexity in…
S33
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — The discussion concluded with recognition that India’s AI healthcare future depends more on addressing human factors, re…
S34
AI and Data Driving India’s Energy Transformation for Climate Solutions — A critical theme throughout the discussion was the importance of defining success metrics and solution frameworks from t…
S35
Powering AI Global Leaders Session AI Impact Summit India — “And what that really means is the technology continues to accelerate.”[14]. “going to become even faster and faster.”[1…
S36
Press Conference: Closing the AI Access Gap — Moreover, the speakers argue that AI can drive productivity, creativity, and overall economic growth. It has the capacit…
S37
Elevating AI skills for all — This data-driven insight provided concrete justification for Microsoft’s $4 billion investment and 20 million person tra…
S38
Efforts to improve energy efficiency in high-performance computing for a Sustainable Future — The demand for high-performance computing (HPC) has surged due to technological advancements like machine learning, geno…
S39
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S40
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S41
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — Infrastructure limitations also constrain adoption. Ashutosh noted that compute resources and research talent shortages …
S42
Multistakeholder Partnerships for Thriving AI Ecosystems — “And I would say it’s not an innovation gap, it’s a power gap.”[19]. “So all those things need framework and need govern…
S43
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — Khanna highlights the gap between AI announcements and actual implementation at scale. He emphasizes that while there ha…
S44
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — And you clearly outlined what it means to have a human central leadership, a open, shared, collaborative leadership. Thi…
S45
Indias AI Leap Policy to Practice with AIP2 — And we’ve seen with the GDPR framework. For example, that that has had a limiting effect on the African continent. So I …
S46
The Global Power Shift India’s Rise in AI & Semiconductors — Arguments:India must increase scale of ambition beyond domestic market to compete globally, requiring significant public…
S47
The Global Power Shift India’s Rise in AI & Semiconductors — Both speakers acknowledge that while India has become excellent at fast-following, true leadership requires scaling ambi…
S48
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — Dr. Khaneja outlined CDAC’s substantial progress in building India’s supercomputing backbone through the PARAM series. T…
S49
https://dig.watch/event/india-ai-impact-summit-2026/fireside-chat-intel-tata-electronics-cdac-asia-group-_-india-ai-impact-summit — He has over 28 years of global experience in sales and technology leadership and orchestrates a broad partner ecosystem….
S50
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — And of course, the technology itself needs to continue to evolve. And so on the data foundation, though, it’s not option…
S51
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Data governance, security concerns, and potential token pricing shocks are major barriers preventing pilot projects from…
S52
Waves of infrastructure Open Systems Open Source Open Cloud — Hey, good afternoon, everyone. So, let’s address the… hard truth out there. 90 % of Gen -AI pilots never make it to pr…
S53
Global AI Policy Framework: International Cooperation and Historical Perspectives — -Sovereignty vs. Openness in AI Development: The concept of “open sovereignty” emerged as a key theme – the idea that co…
S54
From KW to GW Scaling the Infrastructure of the Global AI Economy — NVIDIA’s contribution to India’s AI ecosystem includes sharing reference designs for AI factories, open-sourcing control…
S55
AI Safety at the Global Level Insights from Digital Ministers Of — This comment challenges the prevailing narrative around AI sovereignty, arguing that isolationist approaches actually un…
S56
Panel Discussion Data Sovereignty India AI Impact Summit — Both speakers agree that sovereignty should involve strategic partnerships and collaboration rather than complete self-r…
S57
Keynote by Uday Shankar Vice Chairman_JioStar India — “India has some of the finest creative and technical talent anywhere in the world.”[31]. “And this has created a paradox…
S58
https://app.faicon.ai/ai-impact-summit-2026/fireside-chat-the-future-of-ai-stem-education-in-india — everyone talking about the coding and programming and AI tools everywhere around from statistics we as data center build…
S59
Uganda’s National ICT Policy (2014) — Addressing cross-cutting issues such as sustainability, gender, youth, and disability.
S60
Contents — – Docquier, F. 2014. The brain drain from developing countries . Available at: https://wol.iza.org [30 Sep. 2019]. – Eco…
S61
https://app.faicon.ai/ai-impact-summit-2026/building-the-next-wave-of-ai_-responsible-frameworks-standards — What is interesting is India is uniquely positioned in this global AI discourse. Most global AI frameworks are designed …
S62
Scaling Innovation Building a Robust AI Startup Ecosystem — Overall Tone:The tone was consistently celebratory, appreciative, and inspirational throughout. It began formally with t…
S63
Scaling Innovation Building a Robust AI Startup Ecosystem — The tone was consistently celebratory, appreciative, and inspirational throughout. It began formally with the awards cer…
S64
Powering the Technology Revolution / Davos 2025 — The tone was generally optimistic and forward-looking, with panelists highlighting opportunities for innovation and prog…
S65
Friday Opening Ceremony: Summit of the Future Action Days — The overall tone was inspirational, hopeful and energetic. Speakers aimed to motivate and empower youth attendees while …
S66
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S67
Panel 4 – Resilient Subsea Infrastructure for Underserved Regions  — The discussion maintained a professional, collaborative tone throughout, with panelists building on each other’s insight…
S68
Panel Discussion Data Sovereignty India AI Impact Summit — The tone was collaborative and pragmatic throughout, with panelists sharing real-world experiences and solutions rather …
S69
WS #466 AI at a Crossroads Between Sovereignty and Sustainability — The discussion maintained a consistently serious and urgent tone throughout, with speakers expressing genuine concern ab…
S70
Day 0 Event #257 Enhancing Data Governance in the Public Sector — The discussion maintained a pragmatic and collaborative tone throughout, with speakers acknowledging both opportunities …
S71
Swiss AI Initiatives and Policy Implementation Discussion — The discussion maintained a professional, collaborative tone throughout, with speakers presenting both opportunities and…
S72
Shaping the Future AI Strategies for Jobs and Economic Development — I’ll just add on the energy part, right? So you heard also, Narendra talk about putting data centers in space, right? So…
S73
Leaders TalkX: ICT application to unlock the full potential of digital – Part II — The discussion maintained a consistently professional, collaborative, and optimistic tone throughout. Speakers were solu…
S74
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — In summary, the discussion emphasizes the vital aspects of global governance, stakeholder engagement, and value alignmen…
S75
Rethinking learning: Hope, solutions, and wisdom with AI in the classroom — But whether we ‘need’ it or not, it exists and students have access to it. The question becomes: how do we respond to th…
S76
Keynote-Nikesh Arora — Overall Tone:The tone begins optimistically, celebrating AI’s rapid progress and potential, then shifts to a more cautio…
S77
Bridging the AI innovation gap — The tone is consistently inspirational and collaborative throughout. The speaker maintains an optimistic, forward-lookin…
S78
High-level dialogue on Shaping the future of the digital economy (UNCTAD) — The analysis ends on an optimistic note, with the belief that it is possible to design technology for societal benefit. …
S79
Harnessing Collective AI for India’s Social and Economic Development — <strong>Moderator:</strong> sci -fi movies that we grew up watching and what it primarily also reminds me of is in speci…
S80
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — I would give them the cliff notes. It is an astonishing thing to make the future sound boring, and yet a lot of AI books…
S81
Keynote Adresses at India AI Impact Summit 2026 — Peace doesn’t come from hoping adversaries will play fair. We all know they won’t. Peace comes through strength. India u…
S82
https://app.faicon.ai/ai-impact-summit-2026/panel-discussion-ai-in-digital-public-infrastructure-dpi-india-ai-impact-summit — And as we look at the journey on AI, which is just beginning for most of the world, what I see is if I look at the US, f…
S83
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Thank you everyone We are up against Jan, we are up against her boss. So, but, let’s have fun in this panel. And the bro…
S84
https://dig.watch/event/india-ai-impact-summit-2026/setting-the-rules_-global-ai-standards-for-growth-and-governance — I’m going to provide a brief introduction and then I’ll have my panelists introduce themselves and we’ll get into the di…
Speakers Analysis — Detailed breakdown of each speaker’s arguments and positions
Vivek Kanneja
6 arguments · 157 words per minute · 1448 words · 551 seconds
Argument 1
Compute capacity & usage – CDAC currently operates ~48 PFLOPS, expanding to ~100 PFLOPS, serving ~15,000 researchers and MSMEs for workloads such as drug discovery, weather prediction, CFD, etc.
EXPLANATION
Vivek explains that CDAC, under the National Supercomputing Mission, has built the PARAM series of supercomputers delivering about 48 PFLOPS today, with plans to reach roughly 100 PFLOPS by year‑end. The infrastructure is primarily used by researchers and small‑medium enterprises for high‑performance scientific and engineering workloads.
EVIDENCE
He describes CDAC’s mandate to build supercomputing capacity, the development of the PARAM series, and the current 48 PFLOPS installed across the country, which will be augmented to about 100 PFLOPS with 60 installations by the end of the year [45-52]. He notes that around 15,000 researchers run jobs on the National Knowledge Network and that start-ups/MSMEs access resources via the Paramuthkarsh facility, using the clusters for drug discovery, bioinformatics, protein folding, molecular modeling, weather prediction, oil exploration, finite-element and computational-fluid-dynamics problems [53-61].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The CDAC supercomputing backbone provides 48 PFLOPS today, will grow to about 100 PFLOPS with 60 installations, and serves roughly 15,000 researchers across the National Knowledge Network [S1] [S2].
MAJOR DISCUSSION POINT
Infrastructure capacity
Argument 2
POC‑to‑production challenges – POCs succeed on curated data, but scaling fails due to data quality, lack of MLOps expertise, and ROI decisions
EXPLANATION
Vivek points out that many projects stall after successful proof‑of‑concepts because real‑world data is messy and organisations lack mature MLOps processes. The decision to move to production also hinges on cost‑benefit analyses and infrastructure choices.
EVIDENCE
He observes that people are happy with POCs trained on curated datasets, but when confronted with real-life, unclean data and the need for proper MLOps, the projects falter; additionally, ROI considerations force choices about on-prem GPUs versus simpler VM setups [95-102].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panelists note that while POCs work on clean datasets, real-world messy data, missing or skewed records, and a shortage of MLOps engineers hinder scaling, with ROI considerations influencing deployment choices [S1] [S2].
MAJOR DISCUSSION POINT
Transition from pilot to scale
DISAGREED WITH
Nitin Bajaj
Argument 3
Pragmatic sovereignty approach – Full independence unrealistic; India will import silicon while keeping control over software, models, and applications; RISC‑V GPU planned for 2029‑30
EXPLANATION
Vivek argues that complete technological sovereignty is not feasible now because India lacks the IP and fab capabilities for advanced chips. A realistic path is to import silicon while retaining control over higher‑level stack components, and CDAC is developing its own RISC‑V GPU for the late 2020s.
EVIDENCE
He explains that designing a GPU requires IP and a fab capable of sub-3 nm processes, which India does not possess, so the pragmatic model is to use external silicon but keep software, models, orchestration and applications under domestic control; CDAC plans a RISC-V based GPGPU by 2029-30 [115-138].
MAJOR DISCUSSION POINT
Sovereignty strategy
Argument 4
Curriculum & practical skills deficit – Graduates strong in theory but lack hands‑on MLOps, data‑cleaning, and real‑world deployment experience; calls for curriculum reform
EXPLANATION
Vivek notes that while Indian engineering graduates have solid theoretical foundations, they are not equipped with practical skills needed for deploying large AI models, such as MLOps, data preprocessing, and handling real‑time constraints. He suggests revamping curricula and adding capstone projects to bridge this gap.
EVIDENCE
He states that many engineers are good at mathematics and basic ML concepts but lack experience in deploying large models, handling noisy or skewed data, real-time constraints, and security considerations, and recommends introducing capstone projects that address peta-scale data challenges [173-183].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for revamping engineering curricula to include hands-on MLOps, data pre-processing and capstone projects with peta-scale data are documented in the summit notes [S1] [S2].
MAJOR DISCUSSION POINT
Talent gap
DISAGREED WITH
Nitin Bajaj
Argument 5
Power‑aware hardware & cooling – Uses power‑island design, clock‑gating, liquid cooling; achieves PUE ~1.2; advocates token‑per‑energy benchmarks and model compression
EXPLANATION
Vivek describes how CDAC’s hardware designs incorporate power‑saving techniques such as power islands and clock gating, and how its supercomputing platforms employ liquid and water cooling to achieve a low Power Usage Effectiveness (PUE) around 1.2. He calls for benchmarking energy per token and compressing models to improve efficiency.
EVIDENCE
He details power-aware design practices like multiple power islands and clock-tree gating, and notes CDAC’s use of liquid cooling (70% liquid, 30% water) moving toward pure liquid cooling, resulting in a PUE of about 1.2 compared with conventional 1.4-1.5 [188-199]; he also proposes benchmarking energy per token and using compressed models to reduce power consumption [200-204].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
CDAC’s hardware employs power islands, clock-tree gating and a 70% liquid-cooling scheme, achieving a PUE around 1.2; the speaker also proposes energy-per-token benchmarking and model compression for efficiency [S1] [S2].
MAJOR DISCUSSION POINT
Energy sustainability
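The PUE figures quoted here are easy to make concrete: PUE is total facility energy divided by IT-equipment energy, so dropping from 1.4 to 1.2 halves the cooling-and-conversion overhead. A minimal illustrative sketch (the numbers are hypothetical, not CDAC measurements):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT energy (1.0 is ideal)."""
    return total_facility_kwh / it_equipment_kwh

def overhead_kwh(it_equipment_kwh: float, pue_value: float) -> float:
    """Energy spent on cooling, power conversion, etc. for a given PUE."""
    return it_equipment_kwh * (pue_value - 1.0)

it_load = 1000.0  # kWh consumed by the servers themselves (illustrative)
print(pue(1200.0, it_load))            # a facility drawing 1200 kWh total -> PUE 1.2
print(overhead_kwh(it_load, 1.4))      # conventional cooling: ~400 kWh overhead
print(overhead_kwh(it_load, 1.2))      # liquid cooling: ~200 kWh overhead
```

At the same IT load, the move from PUE 1.4 to 1.2 roughly halves the non-compute energy bill, which is why both speakers treat PUE as the headline sustainability metric.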
Argument 6
Widespread workflow integration – Success = AI embedded in many workflows, simplifying everyday life
EXPLANATION
Vivek envisions that success over the next few years will be measured by AI being deployed across a broad range of business and societal workflows, making daily activities easier and more enjoyable.
EVIDENCE
He succinctly states, “Success for me would be AI being actually deployed in a lot of workflows and making the life much simpler and enjoyable for us” [223].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel agrees that success should be measured by real-world impact and widespread AI adoption across workflows rather than purely technical milestones [S1].
MAJOR DISCUSSION POINT
Future vision
Nitin Bajaj
5 arguments · 163 words per minute · 1810 words · 665 seconds
Argument 1
ROI & deployment‑model uncertainty – Enterprises struggle to choose between on‑prem, cloud, or edge, assess ROI, and keep pace with rapidly changing models and ecosystem
EXPLANATION
Nitin explains that Indian enterprises face difficulty selecting the optimal deployment model—on‑premises, cloud, or edge—while trying to determine the return on investment. The fast‑evolving AI model landscape and a fragmented ecosystem add to the uncertainty.
EVIDENCE
He outlines that companies evaluate use cases such as smart manufacturing and retail, then wrestle with speed, ROI, whether to use on-prem or cloud, open APIs, final deployment cost, and edge versus centralized architectures, noting that no single answer exists given the rapidly changing silicon, OS, and model ecosystem [70-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Enterprises face decision-paralysis over on-premise, cloud or edge deployments, ROI calculations and the fast-evolving model landscape, as highlighted in the discussion [S1] [S2].
MAJOR DISCUSSION POINT
Adoption barriers
DISAGREED WITH
Vivek Kanneja
Argument 2
Industry‑specific data‑sovereignty impact – Data localisation critical for banking/healthcare, less so for manufacturing/retail; cost‑performance trade‑offs dominate
EXPLANATION
Nitin says that the importance of data sovereignty varies by sector: it is essential for regulated industries like banking and healthcare, while manufacturing and retail prioritize speed and cost, often opting for cloud solutions.
EVIDENCE
He notes that banking and healthcare demand strict data sovereignty, whereas manufacturing and retail favor cloud APIs for faster development and better performance; he also mentions OT environments preferring on-prem or edge deployment, but ultimately cost becomes the key driver [147-156].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Data-sovereignty requirements are deemed essential for regulated sectors like banking and healthcare, while manufacturing and retail prioritize performance and cost, often opting for cloud solutions [S1] [S2].
MAJOR DISCUSSION POINT
Data governance
Argument 3
Demographic advantage & rapid up‑skilling – Young, AI‑exposed population will close the skill gap quickly; personal learning curve mirrors industry
EXPLANATION
Nitin highlights India’s youthful demographic, with a large proportion of the population already exposed to AI, as a catalyst for quickly bridging the talent gap. He also mentions his own learning journey as evidence of rapid up‑skilling.
EVIDENCE
He observes that India’s predominantly young population (roughly ages 13-25) provides a demographic advantage that will help close AI capability gaps within a few years, and he personally is learning from younger colleagues, indicating a fast learning curve [184-185].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The summit notes India’s youthful demographic and rapid up-skilling as key factors that will quickly narrow the AI talent gap [S2].
MAJOR DISCUSSION POINT
Talent dynamics
Argument 4
Intel’s efficiency measures – Intel data centers achieve PUE 1.06; employs RibbonFET and PowerVia technologies for a 15% efficiency gain; stresses judicious model selection
EXPLANATION
Nitin describes Intel’s hardware and data-center strategies to improve energy efficiency, citing a very low PUE of 1.06 and the new RibbonFET and PowerVia process technologies that boost efficiency by 15%. He also emphasizes the need for careful model selection to conserve power.
EVIDENCE
He states that Intel’s latest manufacturing uses RibbonFET and PowerVia technologies for a 15% efficiency improvement, and Intel’s own data centers run at a PUE of 1.06, with a white paper available on intel.com; he adds that selecting appropriate models is essential for power savings [207-214].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Intel’s data-center efficiency achievements (PUE 1.06) and the RibbonFET/PowerVia technologies delivering a 15% gain are cited, along with advice on careful model selection to save power [S1] [S2].
MAJOR DISCUSSION POINT
Energy efficiency
Argument 5
Mass‑scale adoption & societal impact – Success = India leads in data usage for intelligence, enabling even small vendors like a “Sabziwala” to leverage AI; Indic models support broad use cases
EXPLANATION
Nitin envisions that within a few years India will become the top consumer of data, using AI to raise the general intelligence of the population, including small vendors, with Indic language models supporting diverse applications.
EVIDENCE
He remarks that India moved from rank 150 to number one in data usage, aims to improve public intelligence through media consumption, gives the example of a “Sabziwala” (vegetable vendor) up-leveling their business with AI, and expects Indic models to support mass-scale deployment across the country [225-231].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The speaker envisions India becoming the top consumer of data, empowering small vendors such as a “Sabziwala” through AI and Indic language models for mass-scale deployment [S2].
MAJOR DISCUSSION POINT
Future impact
Amanraj Khanna
2 arguments · 154 words per minute · 1231 words · 478 seconds
Argument 1
Policy‑to‑deployment gap – Highlights that massive policy announcements and investments must translate into real‑world deployment and scale
EXPLANATION
Amanraj stresses that while India has announced ambitious AI policies and attracted large investments, the real challenge lies in converting those announcements into tangible deployments that achieve scale.
EVIDENCE
He remarks that “announcement is one thing. Deployment and then achieving scale quite another,” and frames the panel’s purpose as translating vision into adoption and scale [17-20]; he also asks what it takes to move from vision to adoption [21-22].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel stresses that while policy announcements are abundant, the real challenge lies in converting them into tangible deployments and scaling them [S1].
MAJOR DISCUSSION POINT
Implementation gap
Argument 2
Policy perspective on sovereignty – Raises the tension between sovereign aspirations and reliance on global chips/software, seeking realistic focus areas
EXPLANATION
Amanraj points out the paradox that India talks about AI sovereignty while still depending on foreign technology for chips and software, and asks where the country should focus its domestic capability building.
EVIDENCE
He notes that “India talks about sovereignty, but CDAC still relies on global technology… what can India build domestically versus what will we always need to source globally, and where should we focus our capability?” [108-114].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussion points out the paradox of AI sovereignty claims versus dependence on foreign silicon and software, urging a focus on realistic domestic capabilities [S1] [S2].
MAJOR DISCUSSION POINT
Sovereignty vs dependency
Agreements
Agreement Points
POC‑to‑production challenges hinder scaling of AI projects
Speakers: Vivek Kanneja, Nitin Bajaj
POC‑to‑production challenges — POCs succeed on curated data, but scaling fails due to data quality, lack of MLOps expertise, and ROI decisions
ROI & deployment‑model uncertainty — Enterprises struggle to choose between on‑prem, cloud, or edge, assess ROI, and keep pace with rapidly changing models and ecosystem
Both speakers note that while proof-of-concepts work on clean, curated datasets, moving to production stalls because real-world data is messy, organisations lack mature MLOps skills, and cost-benefit (ROI) calculations dictate deployment choices [95-102][70-84].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple studies highlight that 80-95% of AI pilots stall before production due to data silos, governance gaps and organizational barriers, echoing the scaling challenges noted in the Global AI Policy Framework and prior summit discussions [S20][S21][S22][S23][S24][S25].
Energy efficiency and sustainability are critical for AI infrastructure
Speakers: Vivek Kanneja, Nitin Bajaj
Power‑aware hardware & cooling — Uses power‑island design, clock‑gating, liquid cooling; achieves PUE ~1.2; advocates token‑per‑energy benchmarks and model compression
Intel’s efficiency measures — Intel data centers achieve PUE 1.06; employs RibbonFET and PowerVia technologies for a 15% efficiency gain; stresses judicious model selection
Both emphasize the need for low-power designs and efficient cooling, citing low PUE values (≈1.2 for CDAC, 1.06 for Intel) and hardware innovations to reduce energy consumption [188-199][207-214].
POLICY CONTEXT (KNOWLEDGE BASE)
The push for greener AI aligns with recent high-performance computing sustainability initiatives and the Green AI movement, which call for reduced power-usage-effectiveness (PUE) in data centers [S38][S39].
Success is measured by widespread AI deployment that improves everyday life and empowers small enterprises
Speakers: Vivek Kanneja, Nitin Bajaj
Widespread workflow integration — Success = AI embedded in many workflows, simplifying life
Mass‑scale adoption & societal impact — Success = India leads in data usage, enabling even a ‘Sabziwala’ to leverage AI; Indic models support broad use cases
Both describe success as AI being embedded across many workflows and sectors, from large enterprises to small vendors, delivering tangible societal benefits [223][225-231].
POLICY CONTEXT (KNOWLEDGE BASE)
Success metrics emphasizing tangible societal impact have been advocated in India-Japan AI collaboration and AI for Bharat’s health, stressing clear, outcome-based indicators for public services [S32][S33][S34].
India faces a talent gap in practical AI skills, but demographic factors offer rapid up‑skilling potential
Speakers: Vivek Kanneja, Nitin Bajaj
Curriculum & practical skills deficit — Graduates strong in theory but lack hands‑on MLOps, data‑cleaning, and real‑world deployment experience; calls for curriculum reform
Demographic advantage & rapid up‑skilling — Young population exposed to AI will close the skill gap quickly; personal learning curve mirrors industry
Vivek highlights the shortage of hands-on expertise and need for curriculum changes, while Nitin points to India’s youthful demographic as a catalyst for quickly bridging that gap [173-183][184-185].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of India’s workforce highlight a large youth population and fast-track training programs, with industry leaders projecting a short learning curve to close the capability gap [S30][S31][S35][S37][S41].
Policy announcements must translate into concrete AI deployments at scale
Speakers: Amanraj Khanna, Vivek Kanneja, Nitin Bajaj
Policy‑to‑deployment gap — Highlights that massive policy announcements and investments must translate into real‑world deployment and scale
POC‑to‑production challenges — …
ROI & deployment‑model uncertainty — …
The moderator stresses that announcements are insufficient without deployment, and both panelists echo this by describing practical barriers that prevent pilots from scaling, underscoring the need to move from policy to implementation [17-20][95-102][70-84].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for actionable policy have been echoed in recent AI impact summits and the IGF 2024 agenda, emphasizing the need to move from declarations to production-grade implementations [S21][S27][S42].
Similar Viewpoints
Both see the transition from pilot to production as the main bottleneck, driven by data quality, MLOps maturity, and cost considerations [95-102][70-84].
Speakers: Vivek Kanneja, Nitin Bajaj
POC‑to‑production challenges — POCs succeed on curated data, but scaling fails due to data quality, lack of MLOps expertise, and ROI decisions
ROI & deployment‑model uncertainty — Enterprises struggle to choose between on‑prem, cloud, or edge, assess ROI, and keep pace with rapidly changing models and ecosystem
Both prioritize energy‑efficient hardware and cooling solutions, providing concrete low‑PUE metrics as evidence of sustainable AI infrastructure [188-199][207-214].
Speakers: Vivek Kanneja, Nitin Bajaj
Power‑aware hardware & cooling — Uses power‑island design, clock‑gating, liquid cooling; achieves PUE ~1.2; advocates token‑per‑energy benchmarks and model compression
Intel’s efficiency measures — Intel data centers achieve PUE 1.06; employs RibbonFET and PowerVia technologies for a 15% efficiency gain; stresses judicious model selection
Both define success in terms of mass‑scale AI adoption that improves daily life and benefits even small, informal sector actors [223][225-231].
Speakers: Vivek Kanneja, Nitin Bajaj
Widespread workflow integration — Success = AI embedded in many workflows, simplifying life
Mass‑scale adoption & societal impact — Success = India leads in data usage, enabling even a ‘Sabziwala’ to leverage AI; Indic models support broad use cases
Both acknowledge a current talent gap while seeing a path to close it—Vivek through curriculum reform, Nitin through demographic momentum [173-183][184-185].
Speakers: Vivek Kanneja, Nitin Bajaj
Curriculum & practical skills deficit — Graduates strong in theory but lack hands‑on MLOps, data‑cleaning, and real‑world deployment experience; calls for curriculum reform
Demographic advantage & rapid up‑skilling — Young population exposed to AI will close the skill gap quickly; personal learning curve mirrors industry
All three agree that policy and investment announcements are only valuable if they lead to actual AI deployments at scale, and current practical barriers must be addressed [17-20][95-102][70-84].
Speakers: Amanraj Khanna, Vivek Kanneja, Nitin Bajaj
Policy‑to‑deployment gap — Highlights that massive policy announcements and investments must translate into real‑world deployment and scale
POC‑to‑production challenges — …
ROI & deployment‑model uncertainty — …
Unexpected Consensus
Both a government research institute (CDAC) and a private chip maker (Intel) independently prioritize energy efficiency and provide comparable low‑PUE metrics
Speakers: Vivek Kanneja, Nitin Bajaj
Power‑aware hardware & cooling — Uses power‑island design, clock‑gating, liquid cooling; achieves PUE ~1.2; advocates token‑per‑energy benchmarks and model compression
Intel’s efficiency measures — Intel data centers achieve PUE 1.06; employs RibbonFET and PowerVia technologies for a 15% efficiency gain; stresses judicious model selection
Despite representing different sectors (public research vs. private industry), both speakers converge on the importance of energy-aware design and cite concrete low-PUE figures, indicating an unexpected alignment on sustainability priorities [188-199][207-214].
POLICY CONTEXT (KNOWLEDGE BASE)
The Intel-CDAC dialogue highlighted shared commitments to low power-usage-effectiveness designs, reflecting broader industry trends toward sustainable AI hardware [S26][S38][S39].
Overall Assessment

The discussion shows strong convergence on four main fronts: (1) the difficulty of moving AI pilots to production due to data quality and ROI; (2) the centrality of cost‑effective, energy‑efficient infrastructure; (3) a shared vision that AI success means mass‑scale, everyday deployment; and (4) recognition of a talent gap that can be mitigated by curriculum reforms and demographic advantages. All speakers align on the need to turn policy and investment into tangible, sustainable AI outcomes.

High consensus – the speakers largely reinforce each other’s points, suggesting a unified understanding of the practical, economic, and sustainability challenges that must be addressed for India’s AI ambitions to materialise.

Differences
Different Viewpoints
Root causes and remedies for the AI talent gap
Speakers: Vivek Kanneja, Nitin Bajaj
Curriculum & practical skills deficit – Graduates strong in theory but lack hands‑on MLOps, data‑cleaning, and real‑world deployment experience; calls for curriculum reform
Demographic advantage & rapid up‑skilling – Young population will close the skill gap quickly; personal learning from younger colleagues
Vivek argues that the talent shortage stems from university curricula that focus on theory and omit practical MLOps and data-handling skills, recommending capstone projects and curriculum overhaul [173-183]. Nitin counters that India’s youthful demographic will naturally bridge the gap within a few years, citing his own learning from younger peers as evidence [184-185].
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions identify education gaps, industry-academia disconnects, and the need for large-scale training initiatives as primary causes, with proposed remedies including public-private skilling partnerships [S30][S31][S35][S41].
Primary obstacles to moving AI projects from pilot to production
Speakers: Vivek Kanneja, Nitin Bajaj
POC‑to‑production challenges – POCs succeed on curated data, but scaling fails due to data quality, lack of MLOps expertise, and ROI decisions
ROI & deployment‑model uncertainty – Enterprises struggle to choose between on‑prem, cloud, or edge, assess ROI, and keep pace with rapidly changing models and ecosystem
Vivek attributes the stall after successful POCs to messy real-world data, insufficient MLOps capabilities, and cost-benefit calculations that affect infrastructure choices such as GPUs versus simple VM setups [95-102]. Nitin emphasizes that enterprises face uncertainty in selecting the optimal deployment model (on-prem, cloud, edge) and that the fast-evolving AI ecosystem makes ROI assessments difficult, leading to pilot stagnation [70-84][93-94].
POLICY CONTEXT (KNOWLEDGE BASE)
Research consistently cites data governance failures, lack of scalable infrastructure, and economic constraints as the main hurdles preventing pilots from reaching production [S22][S24][S25][S23][S20].
Metrics for judging AI success in India
Speakers: Vivek Kanneja, Nitin Bajaj
Widespread workflow integration – Success = AI being actually deployed in a lot of workflows and making life simpler
Mass‑scale adoption & societal impact – Success = India leads in data usage, enabling even small vendors like a “Sabziwala” through Indic models
Vivek defines success succinctly as AI being embedded across many workflows, improving everyday life [223]. Nitin envisions success as India becoming the top consumer of data, using AI to raise public intelligence and empower small businesses, supported by Indic language models [225-231].
POLICY CONTEXT (KNOWLEDGE BASE)
Stakeholders have called for concrete, outcome-based metrics such as service simplification ratios and adoption rates to evaluate AI impact, as outlined in recent collaborative frameworks [S32][S34][S33].
Unexpected Differences
Speed at which the AI talent gap will be closed
Speakers: Vivek Kanneja, Nitin Bajaj
Curriculum & practical skills deficit – calls for systemic education reform
Demographic advantage & rapid up‑skilling – predicts a quick natural closure of the gap
While both recognize a talent shortage, Vivek’s call for deep curriculum changes contrasts sharply with Nitin’s optimistic view that India’s young population will naturally resolve the issue within a few years, an unexpected divergence given their shared focus on building AI capacity [173-183][184-185].
POLICY CONTEXT (KNOWLEDGE BASE)
Projections vary, but several leaders argue that India’s demographic advantage and accelerated training programs could significantly narrow the gap within a few years [S31][S35][S37].
Overall Assessment

The panel largely concurs on the need for AI scaling, talent development, and sustainability, but they diverge on the perceived root causes and preferred solutions—Vivek emphasizes structural reforms (curriculum, MLOps, controlled sovereignty) while Nitin highlights market‑driven choices, demographic strengths, and hardware efficiencies.

Moderate disagreement: differences are more about emphasis and implementation pathways than fundamental contradictions, suggesting that coordinated policy and industry actions will need to reconcile education reform with leveraging demographic momentum and provide clear guidance on ROI, deployment models, and energy benchmarks.

Partial Agreements
The speakers share the goal of achieving large‑scale AI adoption and building a skilled workforce, but they diverge on what they see as the main obstacles and the most effective solutions to overcome them [173-183][70-84][184-185].
Speakers: Vivek Kanneja, Nitin Bajaj
Both agree that scaling AI from pilot to production is essential, but differ on the primary bottlenecks and remedies (Vivek focuses on data quality and MLOps; Nitin focuses on ROI and deployment‑model choices)
Both acknowledge a talent gap, yet Vivek calls for curriculum reform while Nitin points to demographic advantage for rapid up‑skilling
Both agree that sustainability is critical, but propose different measurement approaches and technical solutions to reduce power consumption [200-204][207-214].
Speakers: Vivek Kanneja, Nitin Bajaj
Both stress the importance of energy efficiency for AI infrastructure, yet Vivek proposes token‑per‑energy benchmarking and advanced cooling, while Nitin highlights hardware PUE improvements and RibbonFET/PowerVia technology
Takeaways
Key takeaways
India has built a substantial AI compute foundation (≈48 PFLOPS, expanding to ≈100 PFLOPS) serving researchers, MSMEs, and national missions.
Enterprise AI adoption is stalled by uncertainty over deployment models (on‑prem, cloud, edge), ROI calculations, and rapid model turnover.
Proof‑of‑concepts often fail to scale because of data‑quality issues, lack of MLOps expertise, and unclear cost‑benefit decisions.
Full technological sovereignty is unrealistic in the short term; a pragmatic approach is to import silicon while retaining control over software, models, and applications, with a home‑grown RISC‑V GPU targeted for 2029‑30.
Data‑sovereignty requirements vary by industry; banking/healthcare prioritize localisation, while manufacturing/retail lean toward cloud for speed and performance.
A talent gap exists: graduates are strong theoretically but lack hands‑on experience in large‑model training, deployment, and data engineering; demographic advantage may close this gap quickly.
Energy consumption is a critical concern; CDAC employs power‑aware design, liquid cooling, and aims for low PUE (~1.2), while Intel reports PUE of 1.06 and uses efficiency‑focused hardware.
Success in the next 3‑5 years is envisioned as AI embedded in everyday workflows across sectors, enabling even small vendors to leverage AI, supported by Indic models and mass‑scale deployments.
Resolutions and action items
Call for curriculum reform in engineering colleges to incorporate practical MLOps, data‑cleaning, and large‑model deployment training.
Proposal to establish benchmark metrics such as energy‑per‑token for training and inference to guide sustainable AI development.
Suggestion to pursue a ‘frugal AI’ strategy leveraging CPU‑centric solutions (Intel Core/Ultra and Xeon) for many workloads to reduce reliance on GPUs.
Commitment by CDAC to develop a RISC‑V based GPGPU by the 2029‑30 timeframe.
Unresolved issues
Precise decision framework for enterprises to choose between on‑prem, cloud, or edge deployments given the evolving model landscape.
Quantitative methodology to assess ROI of AI projects beyond qualitative statements.
Extent and timeline for achieving full data‑sovereignty while still depending on foreign silicon and software stacks.
Scalable pathways to bridge the talent gap beyond curriculum changes (e.g., industry‑academia partnerships, certification programs).
Standardized energy‑efficiency benchmarks and policies for AI training/inference across Indian institutions.
Suggested compromises
Adopt a pragmatic sovereignty model: import silicon (e.g., NVIDIA, Intel, AMD) while keeping control over software, models, and applications.
Employ a hybrid deployment approach—mixing cloud for rapid prototyping with on‑prem or edge for sensitive or cost‑critical workloads.
Utilize existing CPU infrastructure for many AI workloads (frugal AI) to balance performance needs against cost and energy consumption.
Combine power‑aware hardware design with advanced cooling (liquid/air) to achieve lower PUE while acknowledging the need for substantial power infrastructure.
Thought Provoking Comments
The biggest gap today is what to use, whether to use it on‑prem or go on cloud, use open APIs, then once use cases are ready, what is the final cost of that deployment? And then the third angle is whether to centralize or take it to the edge. There is no single formula – the AI journey is changing so fast that enterprises are still trying to figure out the best deployment model and ROI.
He succinctly captured the core practical dilemma facing Indian enterprises – the trade‑off between on‑prem, cloud, edge, cost, and speed of model evolution – highlighting why many pilots never scale.
This comment shifted the conversation from high‑level policy and infrastructure to the concrete pain points of enterprises, prompting Vivek to elaborate on why POCs stall and setting up the later discussion on frugal AI and deployment choices.
Speaker: Nitin Bajaj
People are very happy with the POCs. They can train them on curated data sets. But once it hits real‑life situations where the data needs to be cleaned, it’s not clean, you have no proper experience in actual deployments of the MLOps, you have done it in a canned manner, then suddenly the reality hits that, no, it’s not that simple.
He identified a systemic bottleneck – the gap between proof‑of‑concept success and production‑grade MLOps – and linked it directly to ROI concerns raised earlier.
This interjection acted as a turning point, deepening the analysis of why pilots fail, reinforcing Nitin’s earlier point, and leading the panel to discuss talent gaps, curriculum changes, and the need for operational expertise.
Speaker: Vivek Kanneja
When you talk of sovereignty, do you want to be completely independent from silicon up to the application? That’s not possible today. A pragmatic approach is to source silicon externally but keep everything above that – the GPU farm, the models, the orchestration, the applications – under our control.
He provided a realistic, nuanced answer to a politically charged question, balancing aspirational goals with current capabilities and outlining a concrete roadmap (e.g., RISC‑V GPU by 2029‑30).
This comment opened a new thread on the limits of domestic technology, influencing Nitin’s later remarks on data sovereignty and cost, and framing the broader discussion about how India can achieve AI sovereignty without reinventing the entire stack.
Speaker: Vivek Kanneja
We have a set of very bright engineers, but most are trained theoretically. When it comes to actual deployments – handling missing or skewed data, real‑time constraints, security – the curriculum lacks that practical side. We need capstone projects that handle peta‑scale data.
He pinpointed the talent and capability gap as a structural issue, linking education to the earlier MLOps and POC challenges, and suggesting concrete curriculum reforms.
This comment broadened the conversation from infrastructure and cost to human capital, prompting Nitin to contrast with India’s demographic advantage and reinforcing the need for a skilled workforce to realize scale.
Speaker: Vivek Kanneja
Energy is one of the critical factors. We use power‑aware designs, liquid cooling (70/30 ratio), aiming for PUE around 1.2 versus 1.4 for conventional water‑cooled systems. We should benchmark energy per token for training and inference and look at compressed models.
He introduced sustainability metrics (PUE, energy per token) into the AI infrastructure conversation, moving beyond performance to environmental impact and operational efficiency.
This comment added a new dimension to the discussion, leading Nitin to showcase Intel’s own low‑PUE data centers (1.06) and reinforcing the theme of frugal, efficient AI deployment.
Speaker: Vivek Kanneja
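The energy-per-token benchmark proposed here is simple to instrument: integrate device power over an inference run and divide by the tokens produced. A hedged sketch of that arithmetic (power samples are supplied as plain numbers; a real harness would read RAPL counters or GPU telemetry instead):

```python
def energy_per_token_j(power_samples_w, interval_s, tokens_generated):
    """Approximate joules per generated token from periodic power readings.

    power_samples_w: device power draws (watts) sampled every interval_s
    seconds during an inference run (here just a hypothetical list).
    """
    total_energy_j = sum(power_samples_w) * interval_s  # watts * seconds = joules
    return total_energy_j / tokens_generated

# Illustrative run: a steady 300 W draw sampled once a second for 60 s,
# producing 1,200 tokens.
samples = [300.0] * 60
print(energy_per_token_j(samples, 1.0, 1200))  # 15.0 joules per token
```

A metric like this would let the compressed models Vivek mentions be compared directly against full-size ones on the same workload, in the same units.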
We focus on frugal AI – using Intel CPUs that combine GPU, NPU, and CPU in a single processor to run 7‑8 billion‑parameter models at the edge, and Xeon processors that can handle up to 80 billion‑parameter models in data centers. The question is whether a GPU is needed for every instance.
He reframed the cost‑performance debate by presenting a concrete technology‑driven solution (frugal AI) that leverages existing CPU resources, challenging the assumption that GPUs are always required.
This shifted the dialogue from abstract ROI concerns to tangible hardware strategies, influencing the later sustainability discussion and supporting the argument that India can scale AI without massive new GPU imports.
Speaker: Nitin Bajaj
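The feasibility of running 7-8 billion-parameter models on client CPUs is largely a weight-memory question. A back-of-envelope sizing sketch (illustrative only, not Intel's methodology; it ignores KV cache and activations):

```python
def model_footprint_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight-memory footprint of a model in GiB."""
    return params_billion * 1e9 * bytes_per_param / 1024**3

# An 8B-parameter model at different weight precisions:
for label, bpp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{label}: {model_footprint_gb(8, bpp):.1f} GB")
# int4 quantization brings an 8B model to under ~4 GB of weights,
# comfortably within laptop/desktop RAM - the premise behind
# CPU-centric "frugal AI" without a dedicated GPU.
```

The same arithmetic scaled to the ~80B-parameter models cited for Xeon gives tens of gigabytes, which fits server memory but not typical client devices, matching the edge-versus-datacenter split Nitin describes.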
India has a booming young population (largely aged 13‑25) exposed to AI. While there may be a short‑term capability gap, we will bridge it quickly in 2‑4 years, and I’m learning from the kids myself.
He offered a demographic counter‑point to the talent‑gap narrative, suggesting a natural, rapid up‑skilling pipeline driven by a youthful workforce.
This optimistic view balanced the earlier concerns about skill shortages, adding a hopeful tone and implying that policy and industry can leverage this demographic dividend to accelerate AI adoption.
Speaker: Nitin Bajaj
Overall Assessment

The discussion pivoted around three core tensions: translating policy and infrastructure into enterprise scale, navigating sovereignty versus global dependence, and bridging talent and sustainability gaps. Nitin’s articulation of deployment‑model dilemmas and Vivek’s candid exposition of POC‑to‑production failures acted as catalysts, steering the conversation from high‑level announcements to gritty operational realities. Vivek’s pragmatic sovereignty answer and his emphasis on energy efficiency introduced new strategic dimensions, while Nitin’s frugal‑AI proposition and demographic optimism offered concrete pathways forward. Collectively, these comments deepened the analysis, reframed challenges as solvable trade‑offs, and shaped a narrative that India’s AI future hinges on smart infrastructure choices, adaptable talent pipelines, and cost‑effective, sustainable deployment models.

Follow-up Questions
Can you elaborate on the concept of ‘frugal AI’ and how Intel proposes it to the industry?
Nitin mentioned ‘frugal AI’ briefly and indicated he would discuss it later, signalling a need for deeper explanation.
Speaker: Nitin Bajaj
What benchmarks should be established for energy consumption per token during model training and inference?
Vivek suggested the need for a critical benchmark to measure energy per token, highlighting a gap in current evaluation metrics.
Speaker: Vivek Kanneja
How can Indian engineering curricula be updated to include practical MLOps, large‑model deployment, and handling of real‑world data challenges?
Vivek identified a talent gap where graduates lack hands‑on experience with MLOps and messy data, indicating a need for curriculum reform.
Speaker: Vivek Kanneja
What progress is being made on India’s domestic GPU development based on RISC‑V, and what are the realistic timelines and capabilities?
Vivek noted CDAC is designing a RISC‑V GPU expected by 2029‑30, prompting further investigation into feasibility and roadmap.
Speaker: Vivek Kanneja
What are the comparative ROI models for on‑premises, edge, and cloud AI deployments across different industry sectors?
Nitin emphasized ROI as a decisive factor but did not provide detailed models, indicating a need for sector‑specific ROI analysis.
Speaker: Nitin Bajaj
How does data sovereignty impact AI deployment decisions in banking, healthcare, manufacturing, and retail?
Nitin observed varying importance of data sovereignty across sectors but did not quantify its effect, suggesting further study.
Speaker: Nitin Bajaj
What performance thresholds (e.g., prompts per second) determine when CPU‑only solutions are sufficient versus needing GPUs?
Nitin gave an illustrative example of prompt rates, implying a need for systematic performance benchmarks.
Speaker: Nitin Bajaj
What concrete metrics should be used to assess India’s AI progress over the next 3‑5 years?
Both offered high‑level visions of success but lacked specific measurable indicators, indicating a research gap.
Speaker: Vivek Kanneja, Nitin Bajaj
How effective is the Paramuthkarsh initiative in supporting startups and MSMEs with AI workloads?
Vivek mentioned the program but did not provide impact data, suggesting a need for evaluation of its outcomes.
Speaker: Vivek Kanneja
What best practices enable Indian data centers to achieve low PUE (e.g., 1.06) and how can they be replicated nationally?
Both discussed energy efficiency measures, pointing to a need for broader dissemination of low‑PUE strategies.
Speaker: Nitin Bajaj, Vivek Kanneja
What specific challenges prevent Indian enterprises from moving AI proofs‑of‑concept to production at scale, and how can they be mitigated?
Both identified obstacles such as data quality, MLOps expertise, and ROI uncertainty, indicating a need for detailed mitigation strategies.
Speaker: Vivek Kanneja, Nitin Bajaj
How can the ecosystem of silicon vendors, ISVs, and system integrators be coordinated to provide a clear, unified AI deployment formula for enterprises?
Nitin noted the lack of a single deployment formula, highlighting an area for ecosystem‑level research and standardisation.
Speaker: Nitin Bajaj

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Empowering India & the Global South Through AI Literacy

Empowering India & the Global South Through AI Literacy

Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel, convened by the Central Square Foundation, emphasized the urgency of delivering AI literacy to every student, parent and teacher in India as AI becomes embedded in education tools [4-7]. Moderator Bhanu framed the discussion as moving beyond the question of whether AI belongs in classrooms to deciding what futures should be built with the technology already present in phones, homes and schools [14-19].


Dr Shabana explained that AI can transform education by personalising learning for students through customised lessons and feedback, thereby addressing high student-teacher ratios [30-36]; for teachers, AI acts as an assistant that streamlines lesson planning, diagnostics and assessment, improving productivity and pedagogy [40-44]; multilingual voice capabilities also extend quality content to underserved learners and involve parents more actively [45-47]; and aggregated data can help education bodies identify drop-outs and optimise resources [48-50]. She stressed that because all these stakeholders are already interacting with AI, a broad AI-literacy programme is essential for responsible use [50].


Tanushree described the AI Samarth initiative, citing the stories of Shraddha and Poonam, ninth-graders from Odisha whose curiosity turned into confidence as they used AI to verify concepts, generate project ideas and learn ethical considerations [74-85]; the programme reports engagement with roughly 0.9 million government-school students and demonstrates gains in language support, gap reduction and ethics teaching [81-85].


Chitra added that teachers’ readiness varies along a hope-fear spectrum, with awareness and skill gaps on one side and growing confidence through purposeful curriculum and safe training environments on the other [112-124][125-138]; she noted that while some educators over-rely on generative tools, the AI Samarth model promotes balanced use and cascading training that builds emotional safety and agency [140-146][150-154].


Ramya offered a concrete example from a remote Jharkhand school where a teacher felt she was falling behind the pace of AI adoption, but AI-literacy helped her catch up with her students [165].


Concluding, Shri Krishnan highlighted the broader goal of democratising AI through inclusive education policies, such as teaching AI from class three and across all university disciplines, and urged continued experimentation to ensure no learner is left behind [228-244][245-249].


The participants collectively agreed that AI is already integral to learning ecosystems, that comprehensive AI literacy is needed for students, teachers, parents and administrators, and that building confidence while managing fear are key to harnessing AI’s potential responsibly [14-19][30-36][112-124].


They also stressed that AI should complement rather than replace teachers, serving as a tool that enhances instructional quality while preserving the educator’s role [39-44].


Critical thinking was highlighted as a safeguard, with the curriculum encouraging learners to verify AI outputs against reliable sources and to use AI as a reviewer rather than a sole answer provider [193-194].


Scaling AI literacy faces infrastructural challenges, such as the fact that 40% of Indian classrooms still lack computer labs, making equitable access a pressing concern [103-104].


Nonetheless, the panel viewed AI-literacy programmes such as AI Samarth as early but promising steps toward leveling the educational playing field across the Global South [6][81-85].


Keypoints

Major discussion points


AI literacy is essential for every education stakeholder – AI can personalize learning for students, assist teachers in lesson-planning and assessment, and lower barriers to quality education for underserved communities, while education bodies can use AI-driven data insights for planning and risk-identification. [30-38][39-47][48-50]


Ground-level programs (e.g., AI Samarth) show the “curiosity-to-confidence” journey – real-world stories from government-school students illustrate how AI moves from a novelty to a trusted learning companion, improves language access, narrows learning gaps, and introduces ethics and bias awareness. [70-86][82-86]


Teacher preparedness hinges on a hope-vs-fear spectrum and confidence-building – teachers’ awareness, skill-sets, and mindset range from apprehension to enthusiastic adoption; purposeful training that creates an emotionally safe environment helps them move toward balanced, confident use of AI tools. [115-124][126-135][140-147]


A structured AI-literacy curriculum is needed to embed foundational concepts and critical thinking – the AI Samarth curriculum focuses on four pillars (what AI is & its applications, technical basics, societal/environmental impacts, and effective prompting) and stresses verification of AI outputs to nurture critical judgment. [172-190][193-194]


Policy and a broader inclusive vision are required for scaling AI literacy – national policy to teach AI from class 3, interdisciplinary AI education across all disciplines, and the democratizing intent of the summit are highlighted as ways to ensure no one is left behind. [229-247]


Overall purpose / goal of the discussion


The panel aimed to chart a collective roadmap for building “futures through AI literacy” in India and the Global South: defining why AI literacy matters, showcasing pilot implementations, outlining curriculum design, and urging systemic support (teacher training, policy, equitable access) so that all learners become active contributors rather than passive consumers of AI.


Overall tone and its evolution


– The conversation opens with a formal, forward-looking tone, emphasizing the strategic importance of AI literacy.


– It shifts to a hopeful, anecdotal tone as panelists share concrete student stories and positive impacts on learning.


– Mid-discussion, a cautious, reflective tone emerges, acknowledging teachers’ fears, the need for critical thinking, and the risk of over-reliance on AI.


– The closing remarks adopt an inclusive, rallying tone, stressing democratization, policy backing, and a call to action for the broader education community.


Overall, the tone moves from optimism through realistic caution back to an inspiring call for collective action.


Speakers

Speaker 1


Area of expertise / role: Representative of the Central Square Foundation (CSF); focuses on AI literacy in school education in India.


Title: (not specified)


Dr. Shabana Bhattacharya


Area of expertise / role: Senior Project Scientist, Wadhwani School of Data Science and AI, IIT Madras; works on AI-enabled ed-tech, curriculum design, and AI literacy.


Title: Dr.


Ramya Venkataraman


Area of expertise / role: Education practice leader, formerly at McKinsey; founder of CENTA, works on teacher engagement and AI integration across 100+ countries.


Title: (not specified)


Citation: [S7]


Shri S. Krishnan


Area of expertise / role: Government official overseeing technology policy; focuses on scaling technology in education.


Title: Secretary, Ministry of Electronics and Information Technology (MeitY), Government of India.


Citation: [S10]


Chitra Ravi


Area of expertise / role: Founder of Chrysalis (over 25 years in education); leads AI-literacy teacher-training and curriculum development for government and low-fee private schools.


Title: (founder)


Tanushree Narain Sharma


Area of expertise / role: Co-founder and CEO of Transform Schools; leads the AI Samarth program delivering AI literacy to government schools.


Title: CEO


Bhanu Potta


Area of expertise / role: Moderator of the panel discussion; facilitates dialogue on AI literacy for India and the Global South.


Title: (not specified)


Additional speakers:


Gauri – Addressed by name in Bhanu’s opening remarks (“Thank you Gauri”); the handover in the transcript indicates she is Speaker 1, the Central Square Foundation representative who opened the session.


Krishnan ji – Same person as Shri S. Krishnan (already listed).


Full session reportComprehensive analysis and detailed insights

The panel opened with Speaker 1 (addressed as Gauri by the moderator) thanking the IndiaAI summit and introducing the Central Square Foundation (CSF) as a philanthropy focused on school education and, increasingly, on ed-tech. CSF argues that the rapid infusion of artificial intelligence (AI) into educational tools makes it imperative to provide AI literacy to every student, parent and child so that they become active contributors rather than passive recipients [4-6]. To this end CSF has launched a large-scale programme, AI Samarth, the first curriculum-based AI-literacy effort built in partnership with the Wadhwani School of AI [6-7].


Moderator Bhanu Potta framed the discussion as moving beyond the question of whether AI belongs in classrooms to deciding what futures should be built with a technology that is already “in our phones, in our homes, in our classrooms” [15-17]. He noted that previous panels had “a lot of men and very few women” and highlighted the significance of moderating an all-women panel [15-17]. By stating that the debate about AI’s presence in schools has “already passed,” he set a pragmatic tone for the ensuing conversation, focusing on how AI can be responsibly leveraged for students, teachers, parents and education systems [18-20].


Dr Shabana Bhattacharya, senior project scientist, elaborated on the transformative bets that AI can place on education. For learners, she identified personalisation as the chief value-add: AI can generate customised lessons, suggest tailored learning pathways, deliver individualised assessments and provide feedback that directly addresses each student’s conceptual gaps [30-36]. She linked this capability to the chronic high student-teacher ratios in India and the Global South, arguing that AI can help bridge that gap [32-34]. Regarding teachers, she stressed that AI will act as an assistant rather than a replacement, improving productivity through AI-driven lesson-plan creation, diagnostic testing and automated assessment generation [40-44]. Multilingual and voice-based AI lowers barriers for underserved learners and enables parents to engage more meaningfully with their children’s learning [45-47]. At the system level, AI-enabled data aggregation can produce actionable insights-such as dropout-risk identification and resource-planning optimisation-for education bodies [48-50]. Across all stakeholders, speakers agreed AI will augment rather than replace human roles [40-44][45-47][236-237].


Tanushree Narain Sharma described the AI Samarth initiative, which has reached roughly 0.9 million government-school children [81-86]. She recounted the stories of two ninth-grade pupils from Odisha-Shraddha, who now uses AI as a “companion” to cross-check difficult topics and seek teacher clarification [74-78]; and Poonam, who moved from viewing AI as a buzzword to using it for project ideas and preparation [80-84]. These narratives illustrate the “curiosity-to-confidence” journey that the programme seeks to foster, delivering language support, narrowing learning gaps and embedding ethics and bias awareness into the curriculum [81-86].


Ramya Venkataraman offered a concrete example from a remote Jharkhand school, where a teacher initially feared being outpaced by her students’ AI use but, after receiving AI-literacy training, felt able to “catch up” [165].


Chitra Ravi expanded the discussion to teacher preparedness, describing a spectrum that ranges from fear and resistance to hope and enthusiasm [112-124]. She distinguished two dimensions of preparedness: awareness of AI concepts and tools, and the sentiment or mindset toward AI [113-118]. While some educators remain wary, many are “dabbling” with AI on platforms such as WhatsApp, and AI Samarth has helped them acquire the necessary skill-sets [116-119]. She warned of a potential “over-utilisation” trap, where teachers generate lesson plans with large language models (LLMs) without understanding how to validate them [143-145]. Nevertheless, the curriculum’s relevance to teachers’ daily work, combined with emotionally safe training environments, has fostered confidence and purposeful integration [136-138][140-147]. The cascading model-where trained teachers subsequently train peers and students-has created a “politeness” effect, with LLM interactions prompting more courteous classroom language [198-210]. Chitra noted that AI Samarth’s reported reach is close to 200,000 students [112-124], highlighting a scale discrepancy with the 0.9-million figure cited by Tanushree. Overall, she concluded that AI literacy has turned teachers into “agents of change” who can responsibly harness AI while maintaining professional judgement [215-222].


In response to Bhanu’s request for design guidance, Dr Bhattacharya outlined the four-pillar AI Samarth curriculum. The first pillar introduces what AI is and its everyday applications; the second covers technical fundamentals such as data, computer vision and natural-language processing; the third examines societal and environmental impacts, including bias, fairness and the carbon footprint of large-scale AI computation; and the fourth teaches practical interaction skills, notably effective prompting for generative tools [172-190]. To safeguard critical thinking, the curriculum embeds a “review-and-improve” loop: learners first attempt a solution independently, then use AI to critique and refine it, thereby encouraging verification against reliable sources [193-196][194-195]. This approach directly addresses the over-reliance risk highlighted by Chitra.


In his closing remarks, Shri S. Krishnan, introduced by Bhanu as “Shri Krishnan ji” from MeitY, positioned AI literacy within a broader policy and societal context. He praised the summit for “democratising AI” and stressed that inclusive education requires people to acquire the competencies needed to harness technology responsibly [228-232]. Krishnan noted that India’s government has already mandated AI teaching from class three onward and urged that AI be integrated across all university disciplines, not solely within computer-science programmes [242-245]. He framed technology as a multiplier that can augment teachers and resources rather than replace them, while also calling for experimentation and careful evaluation of AI’s impact [236-237][243-244]. He invited participants to explore the AI/technology expo for concrete examples and reiterated the summit’s core message of inclusion-ensuring that no learner or community is left behind [229-232][247-249].


Collectively, the panel reached strong consensus on several fronts. All speakers agreed that universal AI literacy is essential for students, parents, teachers and education bodies [30-36][18-20], and that AI is already embedded in everyday life, shifting the conversation from “if” to “how” [4][14-19][170-189][229-244]. They concurred that AI can personalise learning, act as a teaching assistant, and lower equity barriers through multilingual capabilities [30-37][45-47][172-190]. The importance of building teacher confidence along a hope-versus-fear continuum, providing emotionally safe training, and preventing over-reliance were repeatedly highlighted [112-124][140-147][143-145]. Embedding ethics, bias awareness and critical-thinking practices into curricula was identified as non-negotiable [85-86][193-196]. Finally, the need for policy support-such as early-grade AI instruction and cross-disciplinary integration-was underscored as a prerequisite for scaling these initiatives across the Global South [229-244][245-247].


In summary, the discussion mapped a roadmap for “building futures through AI literacy” that combines (i) a structured, four-pillar curriculum; (ii) large-scale pilots like AI Samarth that demonstrate the curiosity-to-confidence trajectory; (iii) teacher-centred confidence-building programmes that navigate the hope-versus-fear spectrum; and (iv) national policy commitments to embed AI education from the earliest grades and across all disciplines. By aligning curriculum design, teacher training, ethical safeguards and inclusive policy, the panel argued that India and the wider Global South can transform AI from a ubiquitous tool into a catalyst for equitable, high-quality education. The next steps include scaling AI Samarth to its 0.9-million-student target, expanding the curriculum through partnerships such as CSF-Wadhwani, cascading teacher-training models, and monitoring AI-driven data insights to inform resource planning and dropout-prevention strategies. Addressing unresolved challenges-such as age-appropriate metaphors for complex concepts, systematic validation of AI-generated content, sustainable infrastructure funding and long-term governance of AI ethics-will be critical to realising the panel’s inclusive vision.


Session transcriptComplete transcript of the session
Speaker 1

Good evening everyone. It is indeed a pleasure to be here, and thank you to the IndiaAI Mission and the IndiaAI summit for acknowledging this as an important topic to include within the panel discussions scheduled during the summit. I also would like to thank the panel for the summit. So maybe we should just close with a bang, and I see Bhanu sitting with five women, so good luck to them. But I am here on behalf of the Central Square Foundation. Central Square Foundation is a philanthropy working in school education in India, and one of the focus areas for us is edtech. And now that edtech is getting powered with AI, it becomes very important for us to ensure that we impart AI literacy to every student, parent, and child in India, so they are able to become active contributors to AI rather than just being passive recipients.

And that is the fundamental premise of this discussion. CSF has been working on a program, a large-scale program on AI literacy called AI Samarth, and this is the first program with a curriculum for AI literacy that has been built in partnership with the Wadhwani School of AI, which is also represented on this panel. So with that I will hand over to Bhanu to take the proceedings forward. Thank you.

Bhanu Potta

Thank you Gauri. Good evening everybody. Like Gauri said, we are in the last few panels of the summit and I have the distinguished privilege of moderating four stalwart women in the education space. I don’t think there’s any other; all other panels were the other way around, where there were a lot of men and very few women. So it’s my privilege to moderate all of you. I will try to keep it as colloquial as possible, but you can also trouble us in between with questions and stuff. So today’s dialogue is really about building futures through AI literacy for India and the Global South. AI is already in our hands. It’s in our phones, it is in our homes, it is in our classrooms whether we like it or not.

The conversation today is really about what features, futures do we want to build of the… tool which is coming into our hands, right? I think the discussion about should AI be in the classroom or should AI be a learning tool is already passed. It is going to be there and it is for us to figure out how to use it productively, right? So with that as a backdrop, we will move forward into the conversation and quickly talk about each one of you here. Shabana, senior project scientist from the Wadhwani School of Data Science and AI, IIT Madras. Ramya, we all know Ramya, CENTA and a lot of other things as well before that. Tanushree, co-founder and CEO of Transform Schools, and Chitra, founder of Chrysalis, right?

I would actually start the conversation with Shabana, putting you on the spot, right? So your journey in computer science really started with the trigger your 11th grade computer science teacher sparked in you in the classroom, right? In a rural school in the backwaters of Kerala, right? Nice place to be there. But from that time to now, the role which you play as a post-PhD fellow in IIT Madras, working in the ed tech space with all the things you do in AI, both on the usage side, design side, and the regulatory side, I would really like to ask you what, according to you, are the transformative bets AI can bring into the classroom?

Dr. Shabana Bhattacharya

Okay. So if you want to analyze the transformative bets, the major transformation that AI can bring into the classroom, I feel that we should look at the impact that AI has on the key stakeholders in education, namely the students, the teachers, parents, and the education bodies and organizations at large. So if you look at learners, I think the biggest value add is in terms of personalization, right? Because research has already shown that one-on-one human tutoring results in better learning gains as compared to a traditional classroom-based learning. But we know that in India and also across the global south, the student-teacher ratios are quite large, which actually prevents personalization from happening. So this was the case before, but with AI in picture, this can change, and in fact this is changing.

Because AI can help students with personalized learning in terms of customized lessons that are tailored according to the student’s specific conceptual gaps or misconceptions. Or AI can also suggest tailored learning pathways. It can also provide individualized assessments and also tailored feedback pertaining to the performance of the students in the assessments. So that way AI can play a big role in adding quality to the learning process of the students, which definitely helps in bettering it.

Now coming to teachers, again, so we always say that, you know, AI is not going to replace teachers, but it is going to be a better assistant. Its role is going to be an assistant, which can help in better productivity, teaching productivity, as well as the quality of pedagogy is what it brings onto the table. So, for example, teachers can use AI-based tools to create better lesson plans, to run diagnostic tests that can help inform the misconceptions or learning gaps for students in the class and tailor instructions based on that. It can help in, you know, kind of producing required kind of assessments and also even evaluating them. So that way, the value add is in terms of improved productivity and also informed pedagogy.

And so another significant transformation that AI brings is that it lowers the barrier to quality education, especially for the underserved classes. So with the voice-based capabilities and the multilingual capabilities of AI, now quality learning content is available in regional languages to learners, which in fact, you know, not only affects the teachers and learners, but it also affects the parents, in the sense that now parents can have a more engaged contribution to their child’s learning. So that way, it opens a lot of opportunities. And now coming to educational bodies or organizations at large, so we know that data is collected across multiple points, including assessments, attendance systems, and program implementations, etc. Now with AI, it is possible to collect all of these data points, combine and analyze them to give actionable insights, such as, you know, identifying risks of dropouts, identifying high risk students, perform better resource planning, etc.

So that way, AI is already transforming education in a big way. And it is going to be, it has a lot of potential to, you know, play a greater role in education as a whole. now so this kind of emphasizes the need for AI literacy because you know AI is already a part of classroom and all the stakeholders are kind of already interacting with AI so there is the need for the for everybody right not just the students or teachers but even parents and generally for people to have a better understanding of the AI concepts how these tools work in order to use them responsibly

Bhanu Potta

Thank you. So I hear three things. You said stakeholders: student, teacher, and maybe even the parent as a teacher. The second thing you said is getting AI to the last child in our population, which is equity and accessibility. The third thing which you said is that for all the use cases which you talked about to be utilized, there has to be AI education, right? That brings us to the stakeholders, right? So Tanushree, you across the last seven plus years have impacted over 30 million students, right? So my question to you is, from the government school system’s point of view, and when you go on the ground to students, right, especially the government schools primarily, right?

How are they responding to AI in and around them in the classrooms, be it at homes or wherever they’re interacting? What are you seeing on the ground?

Tanushree Narain Sharma

Thanks. Thanks for that question. And thank you for inviting Transform Schools on this panel. So I think in the past seven years, with our experience of working with the government schools, and as an organization, we work on improving learning and life outcomes. But with the program, which is AI Samarth, we have seen one emerging thing, and that is that a child is able to learn and learn.

And I have two great examples to share from our home state, which is Odisha. They’re both from class nine.

One, there’s a girl whose name is Shraddha. And so when we asked her, you know, how’s her experience in a class where she’s getting these lessons from AI Samarth, she said that I used to use AI tools just in a fun way. But now I’m able to cross check my difficult subjects or the topics. And if I’m still facing any difficulties, I’m able to go back to my teachers and cross check again. So she’s using AI as a companion as well. So what I see here is that curiosity is converting into confidence. That is one big thing. There’s another child whose name is Poonam. She said that for us, when we saw AI around us, AI was a buzzword, you know, everybody was talking about it, but it was all for entertainment, because we were saying that, okay, you can gamify it, you can make a cartoon movie about it, all of that. But when I see AI now, I can get my project ideas from it, I can actually, you know, do my preparation from it. So it is very useful to see that, because they are the first generation learners in a government school, so it’s a great, I would say, pleasure to see that happening in a government school. And with AI Samarth we are talking about point nine million students in all, and what we have seen as a pattern overall is three things.

One, which I mentioned earlier, is curiosity converting into confidence. Second, that AI is able to support learning, sorry, language, improving in languages, and reducing the learning gaps students have. And third is, which is the most important thing which AI Samarth does, that it teaches them about the ethics, the biases, and therefore it’s a responsible engagement. So that’s the

Bhanu Potta

Thanks, Tanushree. It’s lovely that you talk about the story of Shraddha and Poonam.

Tanushree Narain Sharma

Thank you.

Bhanu Potta

the journey from curiosity to confidence. And along the journey, being able to use digital tools in the form of AI and exercise their agency as a learner and not lose it.

Tanushree Narain Sharma

That’s correct.

Bhanu Potta

So that’s a beautiful story. And I think, you know, 0.9 million Shraddhas and Poonams.

Tanushree Narain Sharma

Yes.

Bhanu Potta

Thank you. And onward, upward and onward. And more than that, right? So that moves me to another dimension, right? So, Chitra, you’ve been, you founded Chrysalis over 25 years back. And you founded it out of the pain of being a mother of two daughters and their disillusionment in the school system at that point in time.

Chitra Ravi

My disillusionment.

Bhanu Potta

Your disillusionment. They were too young to be disillusioned. And the interesting connection here is that you actually started off as a full-stack ICT curriculum embed provider from grade 1 to grade 12 for CBN, and you have seen that whole wave, right? And we still have over 40% of our classrooms that don’t have computer labs. And now you’re at the start of another wave with AI. So I think you’re looking at the second wave in your own journey at Chrysalis, you and your team and the others working with you. So my question to you is: how are teachers in government schools, and in the low-fee private schools in particular, not the middle and upper, but the low-fee private schools, today transacting around integrating AI into their life, work, and fun?

Chitra Ravi

Yeah, thank you, Bhanu. That is a very important question to be thinking about. And I can speak from a pre-AI Samarth and a post-AI Samarth point of view. So I am very grateful to CSF and everybody who has really pushed us into this wave with very purposeful and meaningful work; it’s giving us a lot of insights into this community of teachers and students. If you look at preparedness when we entered, and I think we are now close to 200,000 students who will have been exposed to AI literacy, I look at preparedness from two points of view. One is awareness about AI, the skill sets, or whatever it means to handle a tool.

And the other is, I think, the sentiment and mindset that every teacher or every student has, right? From an awareness point of view, the spectrum runs from "I just hear this buzzword, and I don’t even want to go near it" on one side, to "yeah, I’m dabbling with it all the time on my WhatsApp, I know it" on the other. And then there’s always this confusion around the thin line between AI and what an ordinary algorithm provides. So I think AI Samarth has really empowered them with certain skill sets that have gotten them to understand what AI is and how they can put it to use. That’s from a skill-set point of view.

But for me, I think more interesting is the sentiment and mindset point of view. If you look at that, it’s a spectrum between hope and fear: hope leading to overstimulation, and fear leading to resistance and not really getting there. So I think that’s a very important spectrum, and a program like the AI Samarth literacy program kind of brings in an equilibrium. That’s very important because, and I can completely relate to what Tanushree was saying, we see so many children and so many teachers who are now telling us, "Hey, you know what? I thought everything was bad about AI. I was so fearful. I thought it’s going to replace me as a teacher."

"I now understand that if I hold the agency and I know what is what, it’s not rocket science." I think that’s the first demystifying thing, right? It’s not anything that they cannot be conversant with. And we are talking about government teachers and affordable-private-school teachers who have found a confidence, like she was mentioning. And that has made them approach AI in a very purposeful way; they are integrating it. I think the way we built the curriculum, thanks to the Wadhwani school and CSF, and we’ve also made some meaningful contribution to that curriculum and content development, we are seeing that the use cases we build in have a very, very high relevance to what the teacher is doing, which is very important.

It’s not just literacy about what is what, but how it can be purposefully used by the teacher. So that’s a big winner for us. Second, when it comes to sentiments, we’ve heard children, actually children and teachers in particular, tell us, "You know, I had a negative approach, and today I walk in with so much positivity. I know this can be engaging; this can be purposefully done." And even that over-utilization, I don’t know what else to call it, where they would just generate lesson plans because somebody, the coordinator or the educational officer, asked them, and they would just go and dabble with the LLM, ChatGPT, and churn out lesson plans, not even knowing what to validate or how to validate.

And that has also been kind of equalized. So I think it’s, in my opinion, a beautiful equalizer. And of course, over a period of time, it will become a leveler; I look at it from that point of view. There’s much more work to be done, but I think it’s a beautiful start, and there are so many positive stories in the field that are giving us a lot of hope. Even for players like us, organizations like us, I wouldn’t say we were not in that spectrum between hope and fear. It’s not only about the teachers. We are all in this phase of: is it going to be the thing tomorrow, or what is our position in this world?

And I think this is kind of now getting to some equilibrium. That’s what I would say.

Bhanu Potta

What I hear you say is that tens of thousands of teachers are finding their balance in the classroom with confidence and agency, right? And I think that’s a good state to be in. And we have a lot more work to do; India is a big country, and the Global South is even bigger. That brings me to Tanushree. Sorry, Ramya. So Ramya, your journey really started off with the time you spent at McKinsey, where you actually bootstrapped the education practice for this region. And then from that horizontal view, you became a falcon and dove into teachers and the problem of teachers, and there comes CENTA, right?

So over the past 11 years at CENTA, teachers from more than 100 countries have engaged with you. And now you’re at a place where you’re seeing AI come into the mix. So I would ask you to be that falcon: fly back out of India, look at the Global South, and talk about what you’re seeing and hearing from teachers about AI, not just in India and what you’re doing with AI, but also in other countries where nothing like this is happening.

Ramya Venkataraman

I mean, in a remote part of Jharkhand we’ve had a teacher saying, "I was worried that my students are getting into this world faster than I am, and now with this literacy I’m able to catch up with my students," which is a very nice thing for her to say. But those are some of the perspectives to start with. I don’t know if I answered, because I was not sure what I was supposed to do as a falcon coming out.

Bhanu Potta

Thank you, thank you, Ramya. I think these are good conversations; three things to pick, right? One is the stakeholders being touched, and their journey from curiosity to confidence for some actors, and then, for a lot of other actors, finding the balance on the hope-and-fear spectrum. So I would like to take the conversation a little bit down into the implementation and design of all of what we’re working on; that’s where the rubber really meets the road, right? So I would go back to Dr. Shabana. Shabana, you have been a part of the curriculum design, the curriculum review, and the endless debates about how we should help a rural school student in Orissa understand computer vision, and what metaphor we would use for that. Those were fascinating conversations we had in those times.

But now, looking back at the journey and at what we’re hearing from the field, not just AI Samarth but a lot of other literacy programs that are happening, I would really like you to focus a little bit on what you would like to see in the design that ensures ethical judgment and critical thinking among all the stakeholders, the child as well as the teacher.

Dr. Shabana Bhattacharya

Okay. So first, I think we should start with the child. When we look at the available AI literacy curriculums around the world, we see that many of them deal with foundational AI literacy along with AI skilling. But AI skilling is more about how to practically use AI, which is more role- or context-dependent; for example, AI skilling for a software engineer could be different from AI skilling for a teacher. AI literacy, on the other hand, refers to the foundational set of AI concepts that is kind of universal, something that everybody should know about. So as part of the AI Samarth curriculum, the Wadhwani school and CSF collaborated to come up with a wonderful curriculum, which I feel is very grounded. It focuses on the key foundational AI concepts that everybody should be aware of, and later you could build on top of it to do further skilling that is more tailored to each role. So to talk about the AI Samarth curriculum, we have four key pillars.

The first is understanding what AI is and what its applications are: looking at the everyday applications and identifying the AI component, because we all interact with AI in some form or another without realizing that it is a form of intelligence we are interacting with. So understanding that is very important. Understanding what AI is, and also having an awareness of some of the key technical aspects: What is data? What is the role of data in training an AI system? What about vision, about NLP? These are the basic technical AI concepts that students, teachers, and universally everybody should be aware of.

And once we have an idea about the role of data in training AI systems, that naturally leads us to understand the societal impacts, the environmental impacts, et cetera. For example, we know about the issues of bias and fairness in AI systems, right? We know that an AI system is trained based on data, so depending on the kind of data used to train the system, we can have these issues, and they arise whenever the systems are used in practice. So the third pillar is definitely about the societal impacts and the environmental impacts, what happens when we require computation at scale. And finally, about how to practically interact with these systems.

How to write effective prompts, because almost all of us interact with generative AI tools such as ChatGPT, right? So how to actually frame an effective prompt to get the desired result is also very important. So I feel that these pillars cover the basic, fundamental AI concepts that are universal, that everybody should be aware of.

Bhanu Potta

So just one more minute on this: in all of this, how do we safeguard critical thinking?

Dr. Shabana Bhattacharya

Okay, so coming to critical thinking. When we know that an AI system is trained using data, we know that the kind of results we would get on a query would depend on the kind of data the system has seen. That should make a student, a teacher, or any user vigilant about using the results from AI tools directly. We have to be careful about double-checking, cross-checking with valid sources, and not trusting the result directly. And the critical-thinking part is also about not over-relying on AI tools and using your judgment to validate the results, to verify, like I said, with verified sources whether the results are valid or not. So that is also important.

Bhanu Potta

I think cross-checking all the results is a very important thing to do. And on that, there was also a lot of exercise done across the curriculum to sprinkle in various good ways of using AI, where we basically said it’s not about asking the AI for the answer, but actually doing your own exercise, submitting your answer, and then asking the AI to review and improve it, which is the critical-thinking loop we reinforced in multiple places. So I’d move forward a little bit to Chitra now. Chitra, I would like to take you down into a teacher-training scenario, where your team has trained a lot of teachers on AI literacy via the curriculum, and then they were cascading it forward, right?

So at that level, what is it that you and your team have done in terms of building confidence for the teachers in the training program to carry that back into their classrooms and deliver it to the children?

Chitra Ravi

I think we’ve always believed that teacher training is more about building an emotionally safe environment, whatever topic we are training teachers on. And with AI and the hope-and-fear spectrum, this becomes even more relevant. So I think both in the curriculum, the way we’ve done it, and in the way we facilitate and train the teachers

Bhanu Potta

Namaste, Krishnan ji. Pleasure to have you here. We welcome Shri Krishnan ji from MeitY. Thank you for joining us today, sir.

Chitra Ravi

So I think we definitely need to look at how the confidence is built. In a lighthearted way, I also want to say a lot of teachers have now started saying, "We are becoming more polite in the classroom." Any idea why? It’s nothing to do with the training but with LLMs, because ChatGPT always tells them, "Hey, you’ve asked a brilliant question." So that is becoming contagious, and with the teachers today we’ve seen that happen. That is why I wanted to bring that up: there is a lot of role modeling that happens when there is cascading, right? And I’m simply saying an LLM cascading is also bringing that politeness.

I hope we don’t become artificially polite, but I’m hoping that some of these things rub off in the language of teachers. So I think that hope and confidence is what the trainers are modeling, and that is being carried on by the teachers when they do the cascading to the students. One other thing in the literacy, I mean the Samarth model: I feel the capacity we are building in the teachers is actually strong, because there is the responsibility of the teacher to go back and train other teachers. And see, they are all teachers. When we analyze why teachers resist training, it’s because they think of themselves as the deliverers, and then to receive becomes difficult for them.

So I love this AI Samarth model, the literacy model we have built, thanks to CSF and the whole think tank there: yes, I’m going to learn, and some of us are actually going to translate it to the students. And that rub-off is really, really working well. The teachers are handling this whole thing very gently, because for them it is a new topic. It’s not like, "I’ve done math teaching for 25 years and found all my children getting into IIT," and all of that. This is a new topic, a new skill, and they are very, very careful when they are doing it.

So there’s a lot of deep respect that they have in terms of translating this into their own training, Bhanu. In many ways this has been an eye-opener into how usual teacher training works and how this AI literacy teacher training is bringing very new insights into how teacher training can happen. So I think that’s very, very critical.

Bhanu Potta

Thank you, Chitra. We would now request Shri Krishnan ji, Secretary, MeitY, to kindly address us.

Shri S. Krishnan

I’m sorry to literally photobomb this session; it was not my intention. Ramya is a very old friend, and when she asked if I could stop by, I just thought I would, primarily, more than anything else, to thank all of you for participating in probably a historic summit, one that finally democratized AI. In that sense, I think what we have managed to do is to actually bring people into the room, and more than even people, we have managed to bring people’s concerns into the room. And I think it’s very fitting that we are talking about education today, which is so critical in every sense of the word. Partly because, if we have to achieve anything we need to achieve, we need people to do it: people who have the skills, the abilities, and the competencies to actually do what needs to be done with technology, to make sure that it works for them, and to make sure also that they’re able to participate and actually take advantage of it.

The other part of it is, of course, while we are making them capable, to what extent can we use technology? I think it’s nobody’s case that we have enough teachers or teachers’ assistants, or that we have the kind of technology and resources we want for education. If technology helps us to multiply that rather than substitute it, and if technology helps us to enable our teachers to deliver better, nothing like it. And I think those are the kinds of applications that we seriously need to look for. The Expo is replete with those examples. Incidentally, it’s open tomorrow, and it’s extended up to 8 pm today, so if you haven’t gone and taken a look at it, let me make a pitch.

Please do go take a look: a huge number of social applications of what technology can do and how it can work for people. All of that is something which would interest many of you. But more than anything else, I’m not an expert on education, and I would be the last one to suggest that this is a silver bullet and it will solve all the problems. It may, it may not. But I think we need to experiment with it. More than anything else, one thing I do know is that students of all ages and in all disciplines need to be aware of the potential of the technology and see what it can do for them, and that in itself would be significant. The Government of India has already made a policy call that they would teach about AI from class three onwards, and I’m sure that many states would do the same thing, and that would form part of the curriculum. Likewise, in practically every institute or university, AI needs to be taught across all disciplines, not necessarily only to the computer science people in the IITs, but to just about everybody. Because, again, the jobs in AI are not just for the guys who build the models; those are the nerdy PhDs, and there are probably about 300 of them across the world who’ll do it. The rest of us are not going to do that; the rest of us are not going to be Sam Altman or Dario Amodei or Demis, none of those. I mean, all of them we saw yesterday.

The rest of us are going to be more journeymen, people who will actually figure out ways in which this technology will affect us. So people need to know, people need to understand, even in an area like art history, what the technology can do to their own discipline and how they can leverage it. So if the Central Square Foundation and educationists and everybody focus on this area and are able to train the next generation to do this, to figure out what jobs can work for them, that’s going to make all the difference. None of us here are in that age group; I mean, Ramya and I are contemporaries, so we are too gray and too old now for this to make a huge difference for us.

But this is a technology for the next generation, and they should not be lost in this. And that, I think, ties to the whole message of this summit, which is inclusion and bringing everyone in. We have to also ensure, through the education process, that no one is left behind. So thank you very much for having me, and I wish you all the best. Again, a plug for the Expo: try and take a look. And thank you all for joining us here today. Thanks, thank you.

Related Resources: Knowledge base sources related to the discussion topics (32)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“Central Square Foundation (CSF) is a philanthropy focused on school‑education and increasingly on ed‑tech, and argues that rapid AI infusion makes it imperative to provide AI literacy to every student, parent and child so they become active contributors rather than passive recipients.”

Speaker 1 representing CSF explicitly discusses empowering India and the Global South through AI literacy, confirming CSF’s focus on school education, ed-tech and the need for widespread AI literacy [S3] and [S1].

Confirmed (medium)

“Moderator Bhanu Potta highlighted that previous panels had many men and few women and emphasized the significance of this women‑only panel.”

The knowledge base notes that the panel was the only all-female panel at the symposium and that gender imbalance on panels has been recognised as an issue, confirming the moderator’s observation [S90] and [S89].

Confirmed (high)

“AI’s chief value‑add for learners is personalisation – generating customised lessons, tailored pathways, individualised assessments and feedback that address each student’s conceptual gaps.”

The AI-education literature cites AI’s potential to deliver personalised learning experiences to every student, supporting the claim about customised content and feedback [S22].

Confirmed (high)

“Personalised AI can help bridge chronic high student‑teacher ratios in India and the Global South.”

Moorosi’s remarks in a high-level session state that AI can address educational challenges by providing personalised learning, especially for overwhelmed teachers and in contexts with large class sizes in India and the Global South [S18].

External Sources (94)
S1
Empowering India & the Global South Through AI Literacy — 1327 words | 169 words per minute | Duration: 470 seconds | Dr. Shabana Bhattacharya Okay. So first, I think we should s…
S2
https://dig.watch/event/india-ai-impact-summit-2026/empowering-india-the-global-south-through-ai-literacy — And so another significant transformation that AI brings is that it lowers the barrier to quality education, especially …
S3
Empowering India & the Global South Through AI Literacy — Dr. Shabana Bhattacharya Okay. So first, I think we should start with the child. Dr. Shabana Bhattacharya Yes. so when …
S4
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S5
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S6
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S7
Empowering India & the Global South Through AI Literacy — -Ramya Venkataraman: Associated with CENTA; previously worked at McKinsey where she bootstrapped the education practice …
S8
Empowering India & the Global South Through AI Literacy — Thank you Gauri. Good evening everybody. Like Gauri said we are in the last few panels of the summit and I have the dist…
S9
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -S. Krishnan- Role/Title: Secretary of METI (Ministry of Electronics and Information Technology)
S10
Empowering India & the Global South Through AI Literacy — -Shri S. Krishnan: Secretary, Ministry of Electronics and Information Technology (MeitY), Government of India
S11
Empowering India & the Global South Through AI Literacy — – Dr. Shabana Bhattacharya- Chitra Ravi
S12
Empowering India & the Global South Through AI Literacy — Speakers:Chitra Ravi, Implicitly supported by other speakers Speakers:Dr. Shabana Bhattacharya, Chitra Ravi Speakers:D…
S13
Empowering India & the Global South Through AI Literacy — – Dr. Shabana Bhattacharya- Tanushree Narain Sharma – Dr. Shabana Bhattacharya- Tanushree Narain Sharma- Bhanu Potta -…
S14
Empowering India & the Global South Through AI Literacy — Speakers:Dr. Shabana Bhattacharya, Tanushree Narain Sharma Speakers:Dr. Shabana Bhattacharya, Tanushree Narain Sharma, …
S15
Empowering India & the Global South Through AI Literacy — – Bhanu Potta- Dr. Shabana Bhattacharya – Dr. Shabana Bhattacharya- Tanushree Narain Sharma- Bhanu Potta – Tanushree N…
S16
Empowering India & the Global South Through AI Literacy — Speakers:Bhanu Potta, Dr. Shabana Bhattacharya Speakers:Dr. Shabana Bhattacharya, Bhanu Potta Speakers:Dr. Shabana Bha…
S17
Responsible AI for Children Safe Playful and Empowering Learning — All right. Good morning, everybody. I’m Sadhna Pandey, and I’m the Chief of Education at UNICEF India. And it’s a pleasu…
S18
High Level Session 3: AI & the Future of Work — Tomas Norvoll: Thank you, Jonathan, and good morning, everyone. I’m really glad to be here at Lillestrøm with you for th…
S19
Education, Inclusion, Literacy: Musts for Positive AI Future | IGF 2023 Launch / Award Event #27 — Another important aspect to consider is the regulation of data collection and usage in education. The ‘datafication’ of …
S20
AI That Empowers Safety Growth and Social Inclusion in Action — “So we’ve engaged with member states and different stakeholders about their priorities, and let me bring to your attenti…
S21
Bottom-up AI and the right to be humanly imperfect | IGF 2023 — Jovan Kurbalija:Thank you. The AI charge EPT won’t reply in this way, you know, therefore it is really smart. Thank you….
S22
Education meets AI — In addition to the above topics, the significance of critical information and critical thinking in education was also di…
S23
IGF 2024 Global Youth Summit — AI has the potential to tailor education to each student’s specific requirements. This personalization can enhance the l…
S24
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — Piyush advocates for a shift from traditional learning-focused education to one that emphasizes creation and practical a…
S25
Fireside Chat The Future of AI & STEM Education in India — The discussion maintained an optimistic yet realistic tone throughout. It began with cautious acknowledgment of AI’s dis…
S26
AI (and) education: Convergences between Chinese and European pedagogical practices — Norman Sze: Thank you for introduction. It’s my honor to join this forum and share insight from perspective of professio…
S27
Artificial General Intelligence and the Future of Responsible Governance — Satunas highlights that education and public critical‑thinking skills are as essential as compute investments for prepar…
S28
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — In the document and then in our trainings, we have four pillars. They’re all linked. The first pillar is context-based a…
S29
Artificial intelligence (AI) and cyber diplomacy — The conversation expanded to highlight the universal need for digital literacy and capacity building in AI, urging gover…
S30
Policy Network on Artificial Intelligence | IGF 2023 — In conclusion, the analysis of the speakers’ points highlights a range of issues in the intersection of technology, gove…
S31
Opening address of the co-chairs of the AI Governance Dialogue — While this transcript captures only the opening remarks of the AI Governance Dialogue, the key comments identified estab…
S32
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Joanna Bryson: Hi, yeah, sure. Thanks very much and sorry not to be in Oslo. I wanted to come specifically to your quest…
S33
AI in schools: The reality is messier than the solutions — As the school year is in full swing, the issue of AI in schools and education keeps coming up everywhere. Teachers share…
S34
Empowering India & the Global South Through AI Literacy — The discussion acknowledged several ongoing challenges. The scale required to reach India’s vast educational system pres…
S35
IGF LAC Space — Daniela Cuspoca:Of course, Paula, good morning everyone. As Paula mentioned, my name is Daniela Cuspo, I am a researcher…
S36
Empowering India & the Global South Through AI Literacy — And so another significant transformation that AI brings is that it lowers the barrier to quality education, especially …
S37
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — A human rights-based approach with community solutions is advocated Demands on policy exist without the building blocks…
S38
Education meets AI — Lastly, the analysis supports teaching critical thinking as a basic skill. It is agreed that students should learn how t…
S39
Diplomatic policy analysis — Overreliance on technology:While machine learning and analytics are powerful tools, they are not infallible. Overdepende…
S40
The perils of forcing encryption to say “AI, AI captain” | IGF 2023 Town Hall #28 — In conclusion, the analysis highlights the complexities and potential implications of technology use in various contexts…
S41
Can we test for trust? The verification challenge in AI — 4. **Terminology confusion** between technical and policy communities Anja Kaspersen discussed the role of technical pr…
S42
Lightning Talk #139 Including youth to the public discourse — Young people are exposed to sophisticated AI-generated content that appears authentic but is fabricated, including fake …
S43
AI as critical infrastructure for continuity in public services — “I believe that there is perhaps awareness challenge as well as the capacity challenge, because I think that this whole …
S44
AI 2.0 The Future of Learning in India — Despite optimistic visions, significant challenges remain. The infrastructure gap between urban and rural areas requires…
S45
Open Forum #58 Collaborating for Trustworthy AI an OECD Toolkit and Spotlight on AI in Government — While both speakers address infrastructure challenges, Avalos uses his technical difficulties as a real-time demonstrati…
S46
AI for Social Empowerment_ Driving Change and Inclusion — Education and Skills System Overhaul:Investment requires fundamental reimagining rather than incremental improvement. Cu…
S47
AI/Gen AI for the Global Goals — Jamila Bio Ibrahim: Yes, very, very interesting question. Integrating AI at government level, in the schools and curric…
S48
Digital Inclusion Through a Multilingual Internet | IGF 2023 WS #297 — Policy-driven incentives, such as vendor preference for demonstrating universal acceptance in governmental procurement, …
S49
Filtered data not enough, LLMs can still learn unsafe behaviours — Large language models (LLMs) can inherit behavioural traits from other models, even when trained on seemingly unrelated d…
S50
WS #219 Generative AI Llms in Content Moderation Rights Risks — Marlene Owizniak: And before I open it up to the floor, I just wanted to highlight a few of the key risks that we found,…
S51
Policies and platforms in support of learning: towards more coherence, coordination and convergence — 152. In principle, the need for more collaboration is widely acknowledged. Overall, the responses to the corporate quest…
S52
Education, Inclusion, Literacy: Musts for Positive AI Future | IGF 2023 Launch / Award Event #27 — Despite the concerns raised, chat GPT emerges as a promising tool for learning. It has the potential to save time by gen…
S53
IGF 2024 Global Youth Summit — AI has the potential to tailor education to each student’s specific requirements. This personalization can enhance the l…
S54
Empowering India & the Global South Through AI Literacy — Dr. Shabana argues that AI’s voice-based and multilingual features make quality learning content accessible in regional …
S55
High Level Session 3: AI & the Future of Work — Education and Skills Development Moorosi argues that AI can address educational challenges by providing personalized le…
S56
Empowering India & the Global South Through AI Literacy — I hope we don’t become artificially polite, but then I’m hoping that some of these things rubs off in the language of te…
S57
Safeguarding Children with Responsible AI — While most teachers recognize AI literacy as a fundamental skill that needs to be taught in schools, less than half feel…
S58
Responsible AI for Children Safe Playful and Empowering Learning — All right, colleagues, we need to come to a close because people need to move to the next session. We’re designing for s…
S59
AI (and) education: Convergences between Chinese and European pedagogical practices — Norman Sze: Thank you for introduction. It’s my honor to join this forum and share insight from perspective of professio…
S61
Driving Enterprise Impact Through Scalable AI Adoption — Educational institutions need to adapt curricula to emphasize critical thinking, question-asking, and evaluation skills …
S62
What policy levers can bridge the AI divide? — Tatenda Annastacia Mavetera: I really want to thank you. Thank you IITU for giving us this opportunity. And Zimbabwe to …
S63
From principles to practice: Governing advanced AI in action — Brian Tse: right now? First of all, it’s a great honor to be on this panel today. To ensure that AI could be used as a f…
S64
(Interactive Dialogue 3) Summit of the Future – General Assembly, 79th session — International Institute for Democracy and Electoral Assistance: President Karis, President Muizzu, thank you for chairi…
S65
High-level AI Standards panel — Sung Hwan Cho: Thank you. I think I fully agree with what Mr. Amandeep told us. I think the coordinated approach, coordi…
S66
Policy Network on Artificial Intelligence | IGF 2023 — In conclusion, the analysis of the speakers’ points highlights a range of issues in the intersection of technology, gove…
S67
Building Trusted AI at Scale – Keynote Anne Bouverot — Overall Tone:The tone is diplomatic, optimistic, and collaborative throughout. It begins with ceremonial courtesy and ap…
S68
Opening address of the co-chairs of the AI Governance Dialogue — The tone is consistently formal, diplomatic, and optimistic throughout. It maintains a ceremonial quality appropriate fo…
S69
How Multilingual AI Bridges the Gap to Inclusive Access — The tone was consistently collaborative, optimistic, and mission-driven throughout the conversation. Speakers demonstrat…
S70
Defying Cognitive Atrophy in the Age of AI: A World Economic Forum Stakeholder Dialogue — The discussion began with a cautiously optimistic tone, acknowledging both opportunities and risks. However, the tone be…
S71
Keynote-Ankur Vora — Overall Tone:The tone is consistently optimistic, inspirational, and mission-driven throughout. The speaker maintains a …
S72
Leveraging AI4All_ Pathways to Inclusion — The discussion maintained a consistently optimistic and solution-oriented tone throughout. Speakers acknowledged signifi…
S73
AI for Social Good Using Technology to Create Real-World Impact — The tone was consistently optimistic and collaborative throughout, with speakers demonstrating genuine enthusiasm for AI…
S74
Building Population-Scale Digital Public Infrastructure for AI — The tone is optimistic and collaborative throughout, with speakers sharing concrete examples of successful implementatio…
S75
How Small AI Solutions Are Creating Big Social Change — The discussion maintained a consistently optimistic and collaborative tone throughout. Panelists demonstrated mutual res…
S76
OPENING STATEMENTS FROM STAKEHOLDERS — Discussions on artificial intelligence show that technological development is not without risk.
S77
Education meets AI — In conclusion, the analysis provides a comprehensive overview of the key points related to digital and AI skills in educ…
S78
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — The discussion maintained a thoughtful but somewhat cautious tone throughout, with speakers acknowledging both opportuni…
S79
(Interactive Dialogue 4) Summit of the Future – General Assembly, 79th session — The overall tone was one of urgency and determination. Many speakers emphasized that “the future starts now” and stresse…
S80
Closing remarks – Charting the path forward — These key comments shaped the discussion by creating a progressive narrative arc that moved from foundational principles…
S81
Parliamentary Closing Closing Remarks and Key Messages From the Parliamentary Track — The discussion maintained a collaborative and constructive tone throughout, characterized by diplomatic language and mut…
S82
Upskilling for the AI era: Education’s next revolution — The tone is consistently optimistic, motivational, and action-oriented throughout. The speaker maintains an enthusiastic…
S83
Closing Ceremony — The discussion maintains a consistently positive and collaborative tone throughout, characterized by gratitude, celebrat…
S84
Building Climate-Resilient Systems with AI — Very exciting sessions. I’ll just wait. Guys. So we are meeting for a tremendously important subject. And this has been …
S85
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Alain Ndayishimiye:Yes thank you moderator once again let me take the opportunity to greet everyone whatever you are in …
S86
Pre 8: IGF Youth Track: AI empowering education through dialogue to implementation – Follow-up to the AI Action Summit declaration from youth — Anja Gengo: Yes, I am. Thank you. I hope you can hear me. First of all, thank you so much for such an interesting and ri…
S87
YCIG &amp; DTC: Future of Education and Work with advancing tech &amp; internet — Marko Paloski: Thank you, Denis. Yes, I want to point also one thing that we discussed in the youth coalition between us…
S88
Designing Indias Digital Future AI at the Core 6G at the Edge — Impact:This comment introduced a concrete technical challenge that grounded the discussion in practical realities. It le…
S89
WSIS 2018 – Moderated high-level policy session 13 — Ms Doreen Bogdan-Martin (Chief of Strategic Planning and Membership Department, International Telecommunications Union (I…
S90
Global Perspectives on Openness and Trust in AI — Kak points out that their panel is the only all-female panel at the symposium, which she sees as both exceptional and pr…
S91
The WSIS welcome Part I: Meet the Movers Behind It — Noteworthy observations from the session included an acknowledgment of the gender imbalance on the panel, which was reco…
S92
Rethinking learning: Hope, solutions, and wisdom with AI in the classroom — But whether we ‘need’ it or not, it exists and students have access to it. The question becomes: how do we respond to th…
S93
Skilling and Education in AI — The tone was cautiously optimistic throughout. Speakers acknowledged both the tremendous opportunities AI presents for I…
S94
AI for Safer Workplaces & Smarter Industries_ Transforming Risk into Real-Time Intelligence — So, yes please. That’s a very beautiful thought that comes and in a country like ours where diversity is huge we need to…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Speaker 1
1 argument · 28 words per minute · 227 words · 479 seconds
Argument 1
Universal AI literacy is essential for every student, parent, and child to become active contributors rather than passive recipients.
EXPLANATION
Speaker 1 argues that AI is now a core part of education and that all members of society – students, parents and children – must be equipped with AI knowledge so they can shape and use the technology rather than simply consume it. This forms the foundational premise of the panel discussion.
EVIDENCE
She states that “now that edtech is getting powered with AI, it becomes very important for us to ensure that we impart AI literacy to every student, parent, and child in India, so they are able to become active contributors to AI rather than just being passive recipients” [4].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UNICEF’s education chief emphasizes AI literacy for children and the summit repeatedly stresses the need for universal AI literacy across students, parents and children [S17] and [S1].
MAJOR DISCUSSION POINT
AI literacy vision
AGREED WITH
Bhanu Potta, Dr. Shabana Bhattacharya, Shri S. Krishnan
Bhanu Potta
2 arguments · 156 words per minute · 1470 words · 564 seconds
Argument 1
AI is already embedded in phones, homes, and classrooms; the discussion must focus on shaping the futures we want with this tool.
EXPLANATION
Bhanu emphasizes that AI is already present in everyday devices and learning environments, so the debate should move beyond whether AI belongs in schools to how we deliberately design its impact on future societies.
EVIDENCE
She notes that “AI is already in our hands. It’s in our phones, it is in our homes, it is in our classrooms whether we like it or not” and adds that the conversation is about “what features, futures do we want to build of the… tool which is coming into our hands” [15-18].
MAJOR DISCUSSION POINT
AI as pervasive tool
AGREED WITH
Dr. Shabana Bhattacharya
Argument 2
Moderators highlight the stakeholder journey from curiosity to confidence and the need for ongoing support.
EXPLANATION
Bhanu summarises the three key themes emerging from the panel – the range of stakeholders affected, the need for equitable access, and the requirement for AI education – and stresses that continuous guidance is essential for sustaining progress.
EVIDENCE
She recaps “three things you said stakeholders student teacher and maybe even the parent… equity and accessibility… AI education…” and later reiterates “what we hear you say is that tens of thousands of teachers are finding their balance… we have a lot more work to do” [51-55] and [155-158].
MAJOR DISCUSSION POINT
Stakeholder journey and support
AGREED WITH
Chitra Ravi, Dr. Shabana Bhattacharya
Dr. Shabana Bhattacharya
6 arguments · 169 words per minute · 1327 words · 470 seconds
Argument 1
AI enables personalized learning by delivering customized lessons, tailored pathways, and individualized feedback to address each student’s gaps.
EXPLANATION
She explains that AI can replicate the benefits of one‑on‑one tutoring by providing lessons and feedback that are adapted to each learner’s specific misconceptions, thereby improving learning outcomes.
EVIDENCE
She describes AI “help[ing] students with personalized learning in terms of customized lessons that is tailored according to the student’s specific conceptual gaps or misconceptions” and also mentions “tailored learning pathways” and “individualized assessments and also tailored feedback” [30-37].
MAJOR DISCUSSION POINT
Personalized learning
AGREED WITH
Chitra Ravi, Shri S. Krishnan
Argument 2
AI acts as a teaching assistant, improving lesson‑plan creation, diagnostic testing, assessment generation, and overall pedagogical quality.
EXPLANATION
According to Dr. Shabana, AI tools can support teachers by automating routine tasks such as lesson‑plan design, diagnostic tests that reveal learning gaps, and the creation and grading of assessments, leading to higher productivity and better pedagogy.
EVIDENCE
She notes that “teachers can use AI-based tools to create better lesson plans, to run diagnostic tests… It can help in producing required kind of assessments and also even evaluating them” [40-44].
MAJOR DISCUSSION POINT
AI as teaching assistant
Argument 3
Multilingual and voice‑based AI lowers barriers, bringing quality content to underserved learners and engaging parents in the learning process.
EXPLANATION
She points out that AI’s ability to operate in regional languages and via voice interfaces makes high‑quality educational content accessible to learners in remote or disadvantaged settings, and also enables parents to participate more actively in their children’s education.
EVIDENCE
She states that “with the voice-based capabilities and the multilingual capabilities of AI… quality learning content is available in regional languages to learners” and that this “also affects the parents, in the sense that now parents can have a more engaged contribution to their child’s learning” [45-47].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panel members highlight AI’s voice-based and multilingual features that deliver quality content in regional languages and enable parent participation [S1] and [S3].
MAJOR DISCUSSION POINT
Equity through multilingual AI
AGREED WITH
Tanushree Narain Sharma, Chitra Ravi
Argument 4
Aggregated data from AI systems provides actionable insights for education bodies, such as dropout risk identification and resource planning.
EXPLANATION
She argues that AI can integrate data from assessments, attendance and program implementation to generate analytics that help education authorities detect at‑risk students, plan resources efficiently, and make evidence‑based decisions.
EVIDENCE
She explains that “with AI, it is possible to collect all of these data points, combine and analyze them to give actionable insights, such as identifying risks of dropouts, identifying high risk students, perform better resource planning” [48-50].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
While the panel promotes data-driven education, IGF discussions raise concerns about student data privacy and call for safe, trustworthy AI, offering a counterpoint to unrestricted data aggregation [S19] and [S20].
MAJOR DISCUSSION POINT
Data‑driven education management
Argument 5
The curriculum is built on four pillars: (1) understanding AI and its applications, (2) technical fundamentals (data, vision, NLP), (3) societal and environmental impacts (bias, fairness), and (4) practical interaction (effective prompting).
EXPLANATION
Dr. Shabana outlines the structure of the AI‑Samarth curriculum, emphasizing foundational AI concepts, technical basics, awareness of societal and environmental consequences, and hands‑on skills such as prompt engineering.
EVIDENCE
She details the four pillars, describing “understanding what is AI and what are the applications… technical aspects like data, vision, NLP… societal impact, bias and fairness… how to write effective prompts” [170-189].
MAJOR DISCUSSION POINT
Curriculum design framework
AGREED WITH
Speaker 1, Bhanu Potta, Shri S. Krishnan
Argument 6
Critical thinking is reinforced by encouraging cross‑checking of AI outputs with verified sources and avoiding over‑reliance on AI answers.
EXPLANATION
She stresses that learners must validate AI‑generated information against trustworthy references and use AI as a tool for review rather than a substitute for independent reasoning, thereby cultivating critical thinking habits.
EVIDENCE
She advises that users should “be vigilant about using the AI the results directly… double-checking cross-checking with valid sources and not trusting the result directly” and that “it is more about not over relying on AI tools and using your judgment to validate the results” [193-196].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of critical thinking and verification of AI-generated information is echoed in IGF sessions on education and critical information literacy, and the panel itself stresses safeguarding critical thinking [S22] and [S1].
MAJOR DISCUSSION POINT
Embedding critical thinking
AGREED WITH
Tanushree Narain Sharma, Bhanu Potta
Tanushree Narain Sharma
3 arguments · 209 words per minute · 661 words · 189 seconds
Argument 1
AI Samarth transforms student curiosity into confidence, enabling learners to use AI as a supportive companion for difficult subjects.
EXPLANATION
Tanushree shares examples of students who began using AI for fun and later turned it into a reliable study aid, illustrating how curiosity evolves into self‑assurance when AI is positioned as a learning partner.
EVIDENCE
She recounts Shraddha’s experience: “I used to use AI tool as something, just in a fun way. But now I’m able to cross check my difficult subjects… I’m using AI as a companion” and notes that this shift reflects “curiosity is converging into confidence” [74-80].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion repeatedly describes “curiosity converting into confidence” as a key outcome of the AI Samarth program [S1] and [S3].
MAJOR DISCUSSION POINT
Curiosity to confidence
Argument 2
AI facilitates language learning and helps close learning gaps for students in government schools.
EXPLANATION
She highlights that AI’s multilingual capabilities support learners in mastering language skills and that targeted AI interventions reduce academic gaps, especially in under‑resourced government schools.
EVIDENCE
She states that AI “is able to support learning, sorry, language, improving into languages” and that it is “reducing the learning gaps students have” [82-85].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panel remarks note AI’s multilingual capabilities that support language acquisition and reduce learning gaps, especially in government schools [S3].
MAJOR DISCUSSION POINT
Language support and gap reduction
AGREED WITH
Dr. Shabana Bhattacharya, Chitra Ravi
Argument 3
The program embeds ethics and bias awareness, fostering responsible and informed AI engagement among students.
EXPLANATION
Tanushree explains that AI Samarth includes modules on ethics and bias, ensuring that learners not only use AI tools but also understand their societal implications and practice responsible usage.
EVIDENCE
She notes that AI Samarth “teaching them about the ethics, the biases, and therefore it’s a responsible engagement” [85-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel underscores the need for responsible, bias-aware AI education and cites policies for safe, trustworthy AI systems, aligning with the program’s ethics focus [S1] and [S20].
MAJOR DISCUSSION POINT
Ethics and responsible AI
AGREED WITH
Dr. Shabana Bhattacharya, Bhanu Potta
Chitra Ravi
3 arguments · 158 words per minute · 1304 words · 495 seconds
Argument 1
Teachers navigate a hope‑fear spectrum; balanced sentiment is achieved through awareness, skill development, and realistic expectations of AI.
EXPLANATION
Chitra describes how teachers’ attitudes range from optimism to anxiety, and that providing both knowledge about AI and practical skills helps them find a middle ground, reducing fear and over‑excitement.
EVIDENCE
She outlines the spectrum “between hope and fear” and explains that awareness and skill-set development move teachers toward equilibrium, while sentiment ranges from “hope leading to over-stimulation” to “fear, resistance” [121-124] and earlier mentions awareness and skill points [113-119].
MAJOR DISCUSSION POINT
Hope‑fear sentiment balance
AGREED WITH
Bhanu Potta, Dr. Shabana Bhattacharya
Argument 2
AI literacy training builds teacher confidence and agency, allowing educators to integrate AI purposefully into their classrooms.
EXPLANATION
She argues that exposure to AI tools, combined with curriculum relevance, gives teachers the confidence to experiment responsibly and embed AI meaningfully in teaching practices.
EVIDENCE
She notes that teachers have gained “confidence” and that the curriculum’s use-cases are “very, very high relevance to what the teacher is doing” and that this has led to purposeful integration [133-138].
MAJOR DISCUSSION POINT
Confidence and agency for teachers
AGREED WITH
Dr. Shabana Bhattacharya, Shri S. Krishnan
Argument 3
Teacher training emphasizes an emotionally safe environment, role‑modeling confidence, and a cascading model that empowers teachers to train peers and students.
EXPLANATION
Chitra emphasizes that creating a supportive, non‑threatening atmosphere and showcasing confident use of AI encourages teachers to become trainers themselves, spreading AI literacy through a cascade effect.
EVIDENCE
She states that “teacher training is more about building an emotionally safe environment” and that “there is a lot of role modeling that happens when there is a cascading” and that teachers “are becoming more polite” after interacting with LLMs, illustrating confidence building [198-202] and [207-214].
MAJOR DISCUSSION POINT
Safe, cascading teacher training
Ramya Venkataraman
1 argument · 175 words per minute · 85 words · 29 seconds
Argument 1
AI literacy helps teachers catch up with rapid AI adoption, reducing the feeling of being left behind relative to their students.
EXPLANATION
Ramya shares a teacher’s anecdote that AI literacy programmes enable educators to bridge the gap with tech‑savvy students, alleviating anxiety about being outpaced.
EVIDENCE
She recounts a teacher from Jharkhand who said, “I was worried that my students are getting into this world faster than I am and now with this literacy I’m able to catch up with my students” [165].
MAJOR DISCUSSION POINT
Teacher catch‑up
Shri S. Krishnan
4 arguments · 162 words per minute · 861 words · 317 seconds
Argument 1
Democratizing AI through education is critical; policies to teach AI from early grades and across all disciplines ensure inclusion and prevent anyone from being left behind.
EXPLANATION
Krishnan stresses that national policy mandating AI education from class three and extending it to all academic streams is essential for inclusive digital empowerment and to avoid creating a new digital elite.
EVIDENCE
He notes that “the government of India has already made a policy call that they would teach about AI from class three onwards” and that AI should be taught “across all disciplines” to ensure no one is excluded [229-244].
MAJOR DISCUSSION POINT
Policy for inclusive AI education
AGREED WITH
Speaker 1, Bhanu Potta, Dr. Shabana Bhattacharya
Argument 2
AI education must extend beyond computer science to all fields (arts, history, etc.) so every professional can understand and leverage the technology.
EXPLANATION
He argues that AI’s impact reaches every sector, so curricula should integrate AI concepts into humanities, arts, and other non‑technical disciplines, enabling all professionals to harness its potential.
EVIDENCE
He says “AI needs to be taught across all disciplines… not necessarily only to the computer science people… even in an area like art history, what is it that the technology can do to their own discipline” [244-246].
MAJOR DISCUSSION POINT
Cross‑disciplinary AI education
Argument 3
Technology should act as a multiplier for teachers and resources, not as a replacement, enabling better delivery of education.
EXPLANATION
Krishnan highlights that AI should augment teaching capacity, helping scarce resources stretch further rather than substituting human educators, thereby improving educational outcomes.
EVIDENCE
He states “if technology helps us to multiply that rather than substitute it and if technology helps us to actually enable our teachers to deliver better” [236-237].
MAJOR DISCUSSION POINT
AI as multiplier, not replacement
AGREED WITH
Dr. Shabana Bhattacharya, Chitra Ravi
Argument 4
Inclusion remains the summit’s core message; AI initiatives must ensure no community or demographic is excluded.
EXPLANATION
He reiterates that the summit’s overarching goal is inclusive AI, urging that programs reach every demographic so that no group is left behind in the AI transformation.
EVIDENCE
He remarks that the summit’s message is “inclusion and bringing everyone in and we have to also ensure through the education process that no one is left behind” [231-233] and again at the close “ensure no community or demographic is excluded” [249-250].
MAJOR DISCUSSION POINT
Inclusive AI for all
Agreements
Agreement Points
Universal AI literacy is essential for all stakeholders (students, parents, teachers, and the broader society) to become active contributors rather than passive recipients.
Speakers: Speaker 1, Bhanu Potta, Dr. Shabana Bhattacharya, Shri S. Krishnan
Universal AI literacy is essential for every student, parent, and child to become active contributors rather than passive recipients. Moderators highlight the stakeholder journey from curiosity to confidence and the need for ongoing support. The curriculum is built on four pillars: (1) understanding AI and its applications, (2) technical fundamentals (data, vision, NLP), (3) societal and environmental impacts (bias, fairness), and (4) practical interaction (effective prompting). Democratizing AI through education is critical; policies to teach AI from early grades and across all disciplines ensure inclusion and prevent anyone from being left behind.
All speakers stress that AI literacy must be universal, covering students, parents, teachers and the wider community, and should be embedded in curricula and policy to enable active participation rather than passive consumption [4][51-55][170-189][229-244].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for universal AI literacy is echoed in initiatives targeting the Global South, where policymakers emphasize foundational AI education to bridge digital divides and support inclusive learning [S34][S46][S51].
AI is already pervasive in everyday life and classrooms; the discussion should focus on shaping its future impact rather than debating its presence.
Speakers: Bhanu Potta, Dr. Shabana Bhattacharya
AI is already embedded in phones, homes, and classrooms; the discussion must focus on shaping the futures we want with this tool. AI can help students with personalized learning … (showing AI is already transforming education).
Both Bhanu and Shabana note that AI is already present in devices and classrooms, shifting the conversation to how to deliberately design its role and benefits [15-18][29-34].
POLICY CONTEXT (KNOWLEDGE BASE)
Observations of AI’s messy integration in schools, with constantly shifting guidelines, highlight its pervasive presence and the shift toward shaping its impact rather than debating its existence [S33].
AI enables personalized learning and serves as a teaching assistant, improving pedagogy, productivity, and equity.
Speakers: Dr. Shabana Bhattacharya, Chitra Ravi, Shri S. Krishnan
AI enables personalized learning by delivering customized lessons, tailored pathways, and individualized feedback to address each student’s gaps. AI literacy training builds teacher confidence and agency, allowing educators to integrate AI purposefully into their classrooms. Technology should act as a multiplier for teachers and resources, not as a replacement, enabling better delivery of education.
Shabana describes AI-driven personalization and assistance for teachers; Chitra reports that teachers gain confidence to use AI purposefully; Krishnan emphasizes AI as a multiplier rather than a substitute [30-37][40-44][133-138][236-237].
Multilingual and voice‑based AI lowers barriers, bringing quality content to underserved learners and engaging parents.
Speakers: Dr. Shabana Bhattacharya, Tanushree Narain Sharma, Chitra Ravi
Multilingual and voice‑based AI lowers barriers, bringing quality content to underserved learners and engaging parents in the learning process. AI facilitates language learning and helps close learning gaps for students in government schools. Teachers in low‑fee private and government schools are integrating AI despite resource constraints.
Shabana highlights AI’s voice and multilingual capabilities; Tanushree notes language support and gap reduction; Chitra points to AI adoption in low-resource schools, all underscoring equity gains [45-47][82-85][103-105].
POLICY CONTEXT (KNOWLEDGE BASE)
Multilingual and voice-enabled AI is promoted as a means to lower barriers for underserved learners, with policy discussions emphasizing inclusive digital content in regional languages [S36][S48].
Teachers experience a hope‑fear spectrum; balanced sentiment and confidence are built through awareness, skill development, and safe training environments.
Speakers: Chitra Ravi, Bhanu Potta, Dr. Shabana Bhattacharya
Teachers navigate a hope‑fear spectrum; balanced sentiment is achieved through awareness, skill development, and realistic expectations of AI. Moderators highlight the stakeholder journey from curiosity to confidence and the need for ongoing support. Critical thinking is reinforced by encouraging cross‑checking of AI outputs with verified sources and avoiding over‑reliance on AI answers.
Chitra describes the hope-fear continuum; Bhanu reiterates the need for support as teachers find balance; Shabana stresses critical thinking as part of achieving equilibrium [121-124][51-55][155-158][193-196].
POLICY CONTEXT (KNOWLEDGE BASE)
Teachers report mixed emotions about AI, reflecting hope and fear, a sentiment documented in faculty discussions and highlighted as a challenge for capacity-building in education systems [S33][S44].
Embedding ethics, bias awareness, and critical thinking in AI literacy is essential for responsible use.
Speakers: Tanushree Narain Sharma, Dr. Shabana Bhattacharya, Bhanu Potta
The program embeds ethics and bias awareness, fostering responsible and informed AI engagement among students. Critical thinking is reinforced by encouraging cross‑checking of AI outputs with verified sources and avoiding over‑reliance on AI answers. Critical thinking loop: students do their own work first, then use AI to review and improve.
Tanushree stresses ethics and bias modules; Shabana and Bhanu both emphasize validation and critical thinking practices to ensure responsible AI use [85-86][193-196][194-195].
POLICY CONTEXT (KNOWLEDGE BASE)
Embedding ethics, bias awareness, and critical thinking aligns with educational recommendations that prioritize critical evaluation of AI outputs and address overreliance on algorithms [S38][S39][S49].
Similar Viewpoints
Both see AI as an augmenting tool that enhances teacher productivity and educational outcomes without replacing educators [40-44][236-237].
Speakers: Dr. Shabana Bhattacharya, Shri S. Krishnan
AI acts as a teaching assistant, improving lesson‑plan creation, diagnostic testing, assessment generation, and overall pedagogical quality. Technology should act as a multiplier for teachers and resources, not as a replacement, enabling better delivery of education.
Both highlight AI’s multilingual capabilities as a means to improve equity, language acquisition and reduce learning gaps for disadvantaged students [45-47][82-85].
Speakers: Dr. Shabana Bhattacharya, Tanushree Narain Sharma
Multilingual and voice‑based AI lowers barriers, bringing quality content to underserved learners and engaging parents in the learning process. AI facilitates language learning and helps close learning gaps for students in government schools.
Both stress the importance of moving teachers from fear to confidence through supportive training and continuous guidance [121-124][51-55].
Speakers: Chitra Ravi, Bhanu Potta
Teachers navigate a hope‑fear spectrum; balanced sentiment is achieved through awareness, skill development, and realistic expectations of AI. Moderators highlight the stakeholder journey from curiosity to confidence and the need for ongoing support.
Unexpected Consensus
Interaction with LLMs makes teachers more polite, indicating subtle behavioral changes beyond instructional benefits.
Speakers: Chitra Ravi, Bhanu Potta
Teacher training emphasizes an emotionally safe environment, role‑modeling confidence, and a cascading model that empowers teachers to train peers and students. Teachers are becoming more polite after interacting with LLMs, suggesting AI influences interpersonal dynamics.
While the primary focus was on confidence and skill building, Chitra observed that LLM interactions led teachers to adopt more courteous language, a nuance not anticipated in the broader discussion [198-202][207-210].
Overall Assessment

There is strong consensus among panelists that AI literacy must be universal, that AI is already pervasive and should be shaped responsibly, that it can personalize learning and assist teachers, that multilingual AI can bridge equity gaps, that teachers need balanced hope‑fear support, and that ethics and critical thinking are non‑negotiable components of any AI curriculum.

High consensus across all major themes, indicating a shared vision for inclusive, ethical, and capacity‑building AI education that can guide policy, curriculum design, and implementation in the Global South.

Differences
Different Viewpoints
Risk of over‑utilisation of AI tools without proper validation versus the need for critical thinking and verification
Speakers: Chitra Ravi, Dr. Shabana Bhattacharya
Chitra notes that teachers may generate lesson plans using LLMs without knowing how to validate them, leading to over-utilisation [143-145]. Shabana stresses that users must double-check AI outputs with valid sources and avoid over-reliance, embedding critical thinking in the curriculum [193-196].
Chitra warns that teachers might churn out lesson plans using AI without understanding validation, whereas Shabana argues that AI literacy must teach learners to cross-check results and not trust AI blindly, highlighting a tension between rapid adoption and safeguarding critical thinking [143-145][193-196].
POLICY CONTEXT (KNOWLEDGE BASE)
Experts warn of over-reliance on AI tools without verification, calling for ongoing human oversight and verification mechanisms to mitigate bias and misinformation risks [S39][S41][S42][S50].
Scope and mandate of AI education – universal policy‑driven curriculum versus targeted program‑based literacy initiatives
Speakers: Shri S. Krishnan, Speaker 1, Bhanu Potta, Dr. Shabana Bhattacharya, Tanushree Narain Sharma
Krishnan calls for a national policy to teach AI from class three onward and across all disciplines, framing it as essential for inclusion [229-244]. Other speakers describe sector-specific programmes (AI Summit, AI Samarth) and teacher-focused curricula without mentioning a universal mandate [4][15-18][170-189][56-70].
Krishnan pushes for a top-down, compulsory AI education policy for all grades and streams, while the panelists primarily discuss voluntary, partnership-driven literacy programmes aimed at students, teachers and parents, revealing differing views on how AI education should be institutionalised [229-244][4][15-18][170-189][56-70].
Perceived ubiquity of AI in classrooms versus on‑ground infrastructure gaps
Speakers: Bhanu Potta, Shri S. Krishnan
Bhanu states that AI is already in phones, homes and classrooms, implying widespread presence [15-17]. Krishnan points out that 40% of Indian classrooms still lack computer labs, highlighting a digital infrastructure deficit [103].
Bhanu’s assertion that AI is already embedded in classrooms contrasts with Krishnan’s observation of significant hardware shortages, indicating a mismatch between perceived AI penetration and actual infrastructural capacity [15-17][103].
POLICY CONTEXT (KNOWLEDGE BASE)
While AI is often assumed ubiquitous, reports from India and other regions underscore significant infrastructure gaps, including lack of computers and connectivity, that limit actual deployment [S34][S37][S43][S44][S45].
Unexpected Differences
Assumption of AI presence in classrooms versus reality of missing computer labs
Speakers: Bhanu Potta, Shri S. Krishnan
Bhanu’s claim that AI is already in classrooms suggests widespread access [15-17]. Krishnan’s remark that 40% of classrooms lack computer labs reveals a substantial access gap [103].
The contrast between Bhanu’s optimistic view of AI’s pervasiveness and Krishnan’s factual note on infrastructure scarcity was not anticipated, exposing a potential blind spot in discussions about AI integration in schools [15-17][103].
POLICY CONTEXT (KNOWLEDGE BASE)
The gap between assumed AI presence and the reality of missing computer labs is documented in assessments of school infrastructure deficits in the Global South [S34][S44].
Overall Assessment

The panel shows strong consensus on the importance of AI literacy, but disagreements emerge around the pace and modality of rollout—whether through universal policy mandates or targeted programmes—and around managing risks such as over‑reliance on AI without critical validation. A notable unexpected tension concerns the gap between perceived AI ubiquity and actual classroom infrastructure.

Moderate: most participants align on the overarching goal, yet diverge on implementation strategies and risk mitigation. These differences could affect coordination, requiring a blended approach that combines policy support, scalable curricula, teacher confidence building, and realistic assessments of infrastructure readiness.

Partial Agreements
While the goal of universal AI literacy is shared, the panel diverges on implementation pathways—programme‑based curricula, teacher‑centred confidence building, or top‑down policy mandates—reflecting complementary but distinct strategies [4][51][170-189][74-80][198-202][229-244].
Speakers: Speaker 1, Bhanu Potta, Dr. Shabana Bhattacharya, Tanushree Narain Sharma, Chitra Ravi, Shri S. Krishnan
All agree that AI literacy is essential for students, teachers, parents and the broader society [4][51][170-189][74-80][198-202][229-244]. Differences lie in how to achieve it: Speaker 1 proposes a large-scale AI-literacy programme (AI Summit) built with a partner school; Bhanu highlights stakeholder journeys and equity; Shabana outlines a four-pillar curriculum; Tanushree shares field examples of curiosity turning into confidence; Chitra focuses on teacher confidence through safe, cascading training; Krishnan calls for a national policy mandate.
Takeaways
Key takeaways
AI literacy must be universal for students, parents, and teachers so that all become active contributors rather than passive users.
AI is already embedded in everyday life and classrooms; the focus should be on shaping the futures we want with this tool.
AI can personalize learning, provide tailored feedback, and lower barriers to quality education through multilingual and voice‑based capabilities.
AI serves as a teaching assistant, enhancing lesson‑plan creation, diagnostics, assessment generation, and overall pedagogical quality.
Aggregated AI data can give education systems actionable insights for dropout risk, resource planning, and policy decisions.
Teacher sentiment spans a hope‑fear spectrum; balanced awareness, skill development, and realistic expectations build confidence and agency.
The AI Samarth program demonstrates that curiosity can be turned into confidence, improves language learning, closes learning gaps, and embeds ethics and bias awareness.
A four‑pillar AI literacy curriculum (understanding AI, technical fundamentals, societal/environmental impacts, practical prompting) is essential for foundational knowledge.
Critical thinking must be reinforced by cross‑checking AI outputs with verified sources and avoiding over‑reliance on AI answers.
AI education should be cross‑disciplinary, reaching all subjects and grades (policy to start from class 3), to ensure inclusion and prevent anyone from being left behind.
Technology should act as a multiplier for teachers and resources, not as a replacement, and must be experimented with responsibly.
Resolutions and action items
Scale the AI Samarth program to reach the target of ~9 million students in government schools.
Continue co‑design and rollout of the AI literacy curriculum (four pillars) in partnership with the Wadhwani School of AI and CSF.
Implement teacher‑training modules that create an emotionally safe environment and promote confidence, then cascade training to peer teachers and students.
Integrate AI literacy components into national curricula starting from class 3, as per government policy, and extend to higher education across disciplines.
Encourage stakeholders to visit and engage with the AI/technology expo for practical examples and collaborations.
Monitor and evaluate AI‑driven data insights (e.g., dropout risk, resource allocation) to inform education policy and planning.
Unresolved issues
Specific strategies for designing age‑appropriate metaphors (e.g., for computer‑vision concepts) for rural learners remain open.
Mechanisms for systematic validation of AI‑generated lesson plans and assessments by teachers need further development.
How to sustainably fund and resource AI infrastructure (devices, connectivity) in underserved schools was not addressed.
Broader perspectives from other Global South countries beyond India were mentioned but not explored in depth.
Long‑term governance frameworks for AI ethics, bias mitigation, and data privacy in school settings were not concretely defined.
Suggested compromises
Position AI as an assistant rather than a replacement for teachers, balancing optimism (hope) with realistic caution (fear).
Create an equilibrium between enthusiasm for AI tools and critical scrutiny, ensuring teachers and students use AI responsibly.
Adopt a phased rollout that combines rapid AI integration with ongoing teacher training and curriculum refinement to manage over‑reliance.
Thought Provoking Comments
AI is already in our hands. It’s in our phones, it is in our homes, it is in our classrooms whether we like it or not. The conversation about should AI be in the classroom or should AI be a learning tool is already passed. It is going to be there and it is for us to figure out how to use it productively.
Frames AI integration as inevitable rather than optional, moving the debate from ‘if’ to ‘how’, which sets a pragmatic tone for the rest of the discussion.
Shifted the conversation from theoretical deliberation to concrete implementation; subsequent speakers focused on practical strategies (personalisation, curriculum design, teacher training) rather than questioning AI’s place in education.
Speaker: Bhanu Potta
The biggest value‑add for learners is personalization… AI can help students with customized lessons, tailored learning pathways, individualized assessments and feedback, thereby lowering the barrier to quality education for underserved classes.
Identifies personalization as the core transformative benefit of AI and links it directly to equity, highlighting how technology can address systemic challenges in Indian and Global South education.
Prompted other panelists (Tanushree, Chitra) to provide concrete examples of how AI is already delivering personalised support in government schools, deepening the discussion around equity and impact.
Speaker: Dr. Shabana Bhattacharya
We have seen that a child is able to learn… curiosity is converting into confidence… AI Samarth helps children move from using AI as a fun buzzword to a companion that supports project ideas, language learning and ethical awareness.
Uses vivid student stories (Shraddha and Poonam) to illustrate the human‑centred transformation AI can bring, and introduces the idea that AI literacy also teaches ethics and bias awareness.
Provided tangible evidence that reinforced Shabana’s claims about personalization and equity; shifted the dialogue toward learner agency and the importance of embedding ethics in AI literacy.
Speaker: Tanushree Narain Sharma
The sentiment and mindset spectrum for teachers is between hope and fear… AI literacy programmes bring an equilibrium, demystifying AI and turning fear into purposeful use, while also warning against over‑utilisation such as blindly generating lesson plans.
Highlights the psychological dimension of AI adoption—teachers’ emotional response—and warns of both under‑use and over‑use, adding nuance to the earlier optimism.
Introduced a critical balancing act that prompted further discussion on teacher training design and the need for safeguards, influencing Shabana’s later focus on critical thinking and validation.
Speaker: Chitra Ravi
AI literacy refers to the foundational set of AI concepts that everybody should know about… our curriculum has four pillars: understanding AI and its applications, technical basics (data, vision, NLP), societal & environmental impacts, and practical prompt engineering.
Articulates a clear, structured framework for AI literacy that separates universal foundations from role‑specific skilling, providing a roadmap for curriculum developers.
Guided the conversation toward concrete curriculum design, leading Bhanu to ask about safeguarding critical thinking and prompting Chitra to discuss teacher‑training methods.
Speaker: Dr. Shabana Bhattacharya
We need to teach AI across all disciplines, not just to computer‑science students. Students in art history, biology, etc., must understand how AI can affect their field and how to leverage it. The government’s policy to start AI education from class three is a crucial step toward inclusion.
Broadens the scope from K‑12 and teacher‑centric perspectives to a systemic, cross‑disciplinary vision, emphasizing policy and lifelong learning.
Expanded the discussion from immediate classroom interventions to national policy and lifelong education, reinforcing the panel’s theme of inclusion and setting a forward‑looking closing note.
Speaker: Shri S. Krishnan
Overall Assessment

The discussion was driven forward by a series of pivotal remarks that moved the panel from abstract acceptance of AI to concrete, equity‑focused applications, and finally to systemic, cross‑disciplinary integration. Bhanu’s framing of AI’s inevitability set the stage, Shabana’s articulation of personalization and curriculum pillars provided a strategic backbone, Tanushree’s student narratives humanised the impact, and Chitra’s insight into the hope‑fear spectrum added critical nuance about teacher adoption. Shri Krishnan’s policy‑level comment then broadened the conversation to national inclusion. Together, these comments shaped a dynamic flow that progressed from ‘why AI matters’ to ‘how it works in practice’ and ultimately to ‘how it should be embedded across the education ecosystem.’

Follow-up Questions
How can AI personalization be effectively implemented in low‑resource and multilingual classrooms to improve learning outcomes?
She highlighted AI’s potential for personalized learning but noted the need to assess its real impact in Indian and Global South contexts.
Speaker: Dr. Shabana Bhattacharya
What frameworks or guidelines are needed to help teachers validate AI‑generated content and avoid over‑reliance on AI tools?
She raised concerns about teachers churning out lesson plans from LLMs without knowing how to validate them, indicating a research gap in verification practices.
Speaker: Chitra Ravi
What measurable impacts does the AI Samarth program have on reducing learning gaps, building confidence, and improving language skills among students?
She described observed benefits but called for systematic evaluation of the program’s outcomes across millions of learners.
Speaker: Tanushree Narain Sharma
How does the hope‑versus‑fear spectrum among teachers affect AI adoption, and what interventions can shift sentiment toward equilibrium?
She noted varying teacher attitudes and the need to study psychological barriers and facilitators for AI integration.
Speaker: Chitra Ravi
How should AI literacy curricula embed ethical judgment and critical thinking for students, teachers, and parents?
She asked for design principles that ensure ethical understanding, a key component for responsible AI use.
Speaker: Bhanu Potta (directed to Dr. Shabana Bhattacharya)
What are the environmental implications of scaling AI tools in education, and how can schools mitigate these impacts?
She mentioned AI’s environmental footprint as part of the curriculum, suggesting a need for sustainability research.
Speaker: Dr. Shabana Bhattacharya
How effective are AI voice‑based and multilingual capabilities in delivering quality education to underserved populations?
Both highlighted AI’s role in language accessibility, prompting investigation into its efficacy across diverse linguistic groups.
Speaker: Dr. Shabana Bhattacharya and Chitra Ravi
What challenges and best practices are emerging in implementing AI education from Class 3 onward across different Indian states?
He referenced the government policy, indicating a need to study rollout, regional variations, and policy impact.
Speaker: Shri S. Krishnan
What insights can be gathered from teachers’ experiences with AI in other Global South countries beyond India?
She was asked to share global perspectives but provided limited detail, highlighting a gap in comparative research.
Speaker: Ramya Venkataraman
What methods can be developed to teach students and teachers effective prompt engineering and critical evaluation of AI outputs?
She sought ways to safeguard critical thinking, underscoring the importance of prompt design and result verification.
Speaker: Bhanu Potta (directed to Dr. Shabana Bhattacharya)
How does AI literacy influence students’ future career pathways and their ability to engage with AI across non‑technical disciplines?
He emphasized AI education for all disciplines, suggesting research on long‑term workforce implications.
Speaker: Shri S. Krishnan
What is the impact of cascading teacher‑training models on the scalability and quality of AI literacy delivery in schools?
She discussed teacher training as an emotional‑safe, cascading process, indicating a need to evaluate its effectiveness at scale.
Speaker: Chitra Ravi
What risks arise from over‑utilization of AI tools (e.g., generating lesson plans without understanding) and how can they be mitigated?
She warned about teachers producing content without validation, pointing to a research area on misuse prevention.
Speaker: Chitra Ravi
How can AI‑driven analytics be used to predict student drop‑out risks and improve resource planning in education systems?
She mentioned AI’s potential for actionable insights, suggesting a need for studies on predictive models in schools.
Speaker: Dr. Shabana Bhattacharya

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

How Trust and Safety Drive Innovation and Sustainable Growth


Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel, composed of two regulators, an industry representative and a civil-society leader, convened to examine how trust and safety function as an engine for AI growth [1-4][10-13]. Host Trevor Hughes highlighted a paradox in the current AI landscape: while a deregulatory mood is evident, trust-related messaging dominates conference materials, raising the question of whether genuine guardrails are being discussed [32-33][36-38]. Alexandra Reeve Givens argued that adoption, and therefore economic success, depends on users’ trust in AI’s purpose, cultural fit, privacy and data security, and that thoughtful regulation can act as a catalyst rather than a barrier [56-64][66-69][70-71]. John Edwards explained that the UK, lacking a specific AI law, relies on the GDPR as a de-facto regulatory regime, using data-protection-by-design and impact assessments to provide common standards that foster trust [84-90][91-98][102-109]. Amanda Craig described Microsoft’s responsible-AI governance program, emphasizing the need for evolving internal frameworks and “provenance tools” to maintain trust as AI systems rapidly change [112-120][314-322]. Denise Wong outlined Singapore’s approach of applying regulation only where clear harms exist, such as AI-generated deepfakes in elections, while using sector-specific rules and horizontal principles to create a market-driven assurance ecosystem [136-141][144-148]. Alexandra also noted that existing laws can be hard to enforce without transparency, citing U.S. employment-discrimination AI tools that obscure bias and impede legal remedies, underscoring the need for disclosure regimes [154-162]. The panel reached broad agreement that trust and safety are essential for innovation, though they differed on whether additional prescriptive regulation is required beyond high-risk scenarios [164-168][236-244].
Participants identified the difficulty of prospectively defining AI harms, suggesting agile governance mechanisms such as codes of practice and outcome-driven legislation to complement more static statutes [247-256][257-260]. In a rapid “innovation round,” panelists highlighted four promising ideas: provenance tools to trace model components, agency-focused designs that restore user control, privacy-enhancing technologies like federated learning, and well-funded independent regulators or civil-society watchdogs [316-322][328-334][346-351][355-356]. John emphasized “agency” as a way to shift responsibility back to providers, ensuring users can exercise ongoing control rather than a one-time consent model [328-336]. Denise added that privacy-enhancing technologies can solve problems that law alone cannot, exemplified by the production-ready use of federated learning for secure model training [346-351]. The discussion concluded that coordinated global regulator collaboration and transparent, adaptable governance are crucial for managing AI risks while enabling innovation [274-280][285-291]. Overall, the panel underscored that building and maintaining trust through both regulatory frameworks and technical safeguards is the central challenge and opportunity for the future of AI deployment [410-419].


Keypoints

Major discussion points


Trust and safety are seen as the engine of AI adoption and innovation, even amid a “deregulatory” climate.


Trevor highlights the paradox of a deregulatory mood while trust-and-safety messaging dominates the summit floor [32-40].


Alex argues that trust is the economic driver that enables adoption and that thoughtful regulation can actually fuel innovation [56-68].


Existing data-protection regimes are being used as de-facto AI safeguards, but many argue that new, AI-specific rules are still needed.


John explains how the UK relies on the UK-GDPR and guidance to cover AI-related risks [86-107].


Denise describes Singapore’s mix of targeted legislation for clear harms and sector-specific or code-of-practice approaches for the rest [136-146].


Alex points out the difficulty of applying current laws (e.g., U.S. employment anti-discrimination rules) without additional transparency requirements [150-162].


Identifying and managing AI harms requires a blend of principles, high-risk taxonomies, and cross-sector coordination.


Amanda outlines Microsoft’s three-category “sensitive uses” framework and stresses the need to manage risk across the entire supply chain [195-220].


Denise notes emerging global harm taxonomies (e.g., International AI Safety Report) and the need for agile, code-of-practice-style governance [236-254].


John emphasizes the importance of regulator-to-regulator collaboration (ICO, Ofcom, GPA) to address fragmented oversight [274-300].


Promising technical and institutional innovations are emerging to operationalise trust and safety.


Amanda cites provenance tools and “software bills of materials” to increase transparency of dynamic AI components [316-322].


John promotes the concept of “agency” – giving users ongoing control beyond initial consent [328-334].


Denise highlights privacy-enhancing technologies such as federated learning as a way to protect data when law falls short [346-351].


Alex stresses the need for well-staffed, independent regulatory bodies and civil-society watchdogs [355-356].


Overall purpose / goal of the discussion


The panel, convened by the IAPP, aimed to explore how trust and safety can act as a catalyst for AI-driven economic growth, to assess the current regulatory landscape (both existing data-protection rules and emerging AI-specific initiatives), and to surface practical governance approaches and emerging innovations that can bridge the gap between rapid AI development and the need for reliable, user-centric safeguards.


Overall tone and its evolution


Opening: Reflective and slightly skeptical, noting a “deregulatory” mood yet abundant trust-and-safety messaging [32-40].


Middle: Collaborative and analytical, with panelists sharing concrete examples, acknowledging challenges, and debating the adequacy of existing frameworks [56-107][136-146][195-220].


Later: Pragmatic and solution-focused, highlighting concrete governance models, cross-regulator coordination, and innovative tools [274-300][316-334][346-356].


Closing: Optimistic and upbeat, celebrating the work of regulators, industry, and civil society while looking forward to future “trust-centric” AI summits [399-406].


The conversation moves from questioning the regulatory paradox to building consensus on practical pathways for embedding trust and safety into AI development and deployment.


Speakers

Alexandra Reeve Givens – CEO, Center for Democracy and Technology; expertise in civil rights, civil liberties, and AI trust and safety [S1].


Amanda Craig – General Manager, Responsible AI Policy, Microsoft; expertise in responsible AI governance and industry policy.


Trevor Hughes – Representative of the International Association of Privacy Professionals (IAPP); expertise in privacy, data protection, and AI governance.


John Edwards – Information Commissioner, United Kingdom (ICO); expertise in data protection law and privacy regulation [S10].


Denise Wong – Deputy Commissioner, Personal Data Protection Commission (PDPC), Singapore; expertise in data protection, AI regulation, and privacy governance.


Additional speakers:


– None.


Full session reportComprehensive analysis and detailed insights

Trevor Hughes (IAPP) opened the session by introducing the four-person panel: Alexandra Reeve Givens (rendered in the transcript as “Alex Reed-Gibbons”; Center for Democracy and Technology), Amanda Craig (Microsoft), John Edwards (UK Information Commissioner’s Office), and Denise Wong (Deputy Commissioner, Singapore Personal Data Protection Commission) [1-13].


He noted a paradox in the current AI climate: while policy circles exude a “deregulatory mood,” every banner on the summit floor foregrounds trust, safety and privacy [32-40]. He asked whether this abundance of trust-related messaging signalled a genuine commitment to guardrails or merely a quieting of regulatory debate [39-40].


Alexandra Reeve Givens argued that long-term AI sustainability, for businesses and society alike, depends on users’ confidence that the technology is fit for purpose, culturally appropriate, privacy-preserving and built on high-quality data [56-64]. Trust, she said, is the economic driver of adoption, and well-designed regulation can “fuel innovation” by providing common standards that relieve firms from assessing trust in isolation [66-71].


John Edwards described the United Kingdom’s de-facto AI governance. Although the UK lacks a dedicated AI statute, it relies on the UK-GDPR as the primary regime, with statutory obligations such as data-protection-by-design, impact assessments and privacy-by-design [86-98]. The ICO issues guidance that maps AI-specific duties (e.g., fairness requirements from the EU AI Act) onto existing GDPR principles, delivering regulatory certainty without leaving a legislative gap [102-109]. Trevor observed that, unlike the widespread diffusion of privacy laws, the EU AI Act has not yet generated a comparable Brussels effect for AI regulation [210-215].


Denise Wong outlined Singapore’s hybrid approach. The PDPC intervenes legislatively only where harms are clearly identifiable, such as a temporary ban on AI-generated deepfakes in election advertising [136-141], and relies on sector-specific rules and horizontal “proto-standards” to create a market-driven assurance ecosystem [144-148]. The outcome-driven Personal Data Protection Act (PDPA) is deliberately non-prescriptive; compliance guidance is issued as advisory codes of practice that can be updated rapidly, giving regulators a flexible toolkit for emerging AI challenges [250-259].


Alexandra illustrated the enforcement challenge with a U.S. example: anti-discrimination statutes prohibit biased hiring, yet AI-driven résumé screening can conceal discriminatory outcomes, making it hard for applicants to prove a violation without a disclosure regime that forces transparency and impact assessments [154-162].


Amanda Craig explained how industry can operationalise trust. Microsoft’s Responsible AI Governance programme classifies “sensitive uses” into three high-impact categories: (i) effects on life-opportunity domains such as employment and education, (ii) psychological or physical harm to vulnerable groups, and (iii) threats to human rights [195-204]. It stresses risk management across the entire AI supply chain, drawing on decades of cybersecurity practice that favours holistic, end-to-end governance rather than point-in-time checks [209-218].


John Edwards reinforced the need for regulator-to-regulator collaboration, describing ongoing coordination between the ICO, Ofcom (which enforces the UK Online Safety Act) and the Global Privacy Assembly in response to the emerging “Grok” incident, an AI-generated content issue under investigation by both agencies [274-300].


A brief tension surfaced around whether existing GDPR-based safeguards are sufficient. Edwards argued they already provide a comprehensive safety net, while Craig and Wong saw a role for complementary, targeted regulation [112-119][136-148][250-259].


Trevor then corrected himself, stating he misspoke and meant “prospective” rather than “prescriptive” regulation [221-235]. Denise echoed the call for outcome-driven, agile tools such as codes of practice [250-259]. Alexandra noted emerging prescriptive elements, including high-risk provisions in the EU AI Act and state-level transparency laws in the United States [262-270]. Amanda highlighted regulatory sandboxes-e.g., Utah’s recent AI sandbox-as a middle ground that offers innovation space while still imposing safeguards [269-271]. Edwards’ mapping of AI obligations onto existing GDPR guidance exemplifies a hybrid, less-prescriptive strategy [103-107].


When asked to name promising innovations, the panel converged on four ideas: (1) provenance tools and “software bills of materials” that can trace dynamic AI model components, increasing transparency and accountability [316-322]; (2) the notion of “agency,” shifting from one-off consent to ongoing user control, including mechanisms to delete data and withdraw from services [328-334]; (3) privacy-enhancing technologies, especially federated learning, now deployable in production to protect personal data when law alone is insufficient [346-351]; and (4) well-staffed, independent regulatory bodies and empowered civil-society watchdogs to represent the public interest and enforce standards [355-356].


Audience polling showed overwhelming agreement that trust and safety are essential for AI innovation, while opinions diverged on the extent of additional regulation needed; most participants favoured a balanced approach that targets high-risk scenarios [164-168][171-179].


In closing, Trevor emphasized the ongoing, collaborative effort across regulators, industry teams and civil-society organisations to embed trust and safety in AI. He quoted Sundar Pichai’s observation that AI’s impact will be “more profound than electricity,” underscoring the urgency of getting the digital economy right [215-218][410-419].


Key take-aways: (1) trust is the prerequisite for AI adoption; (2) existing data-protection regimes can serve as de-facto AI safeguards but may need augmentation; (3) agile, outcome-driven tools such as codes of practice and sandboxes are preferred to rigid statutes; (4) transparency, provenance and the emerging notion of “agency” are critical technical levers; (5) sustained international regulator coordination is essential to manage cross-border AI risks.


Session transcriptComplete transcript of the session
Trevor Hughes

and then we’re going to dive right into my immediate left. I have Alex Reed-Gibbons, who is the CEO of the Center for Democracy and Technology, one of the leading advocacy organizations in the world, working on civil rights, civil liberties all around the world. She’s based in D.C. To her immediate left is Amanda Craig. Amanda is the General Manager for Responsible AI Policy at Microsoft. To Amanda’s left, we have John Edwards. John Edwards is known to many. He is the Information Commissioner of the United Kingdom. And to John’s left, we have Denise Wong, who is the Deputy Commissioner of the PDPC in Singapore, the Privacy and Data Protection Commission. Welcome to our panelists. So we have two regulators, an industry representative and a civil society representative.

And I come from the IAPP. If you don’t know the IAPP, we are a global professional association, a not-for-profit, and we’re neutral. We’re not a company, an advocacy group or a lobbying body; we bring together the people who do the work. Many of them are in the room right now, the people who do the very hard work of data protection and AI governance all around the world. All right, let’s jump in. The title of the session reflects trust as an engine for growth. Let’s think about that just for a minute. Just a few short years ago, I think it was two and a half, maybe three years ago, this event started in Bletchley Park in England.

And in that iteration of the event, it was named the AI Safety Summit. Right around that time, the EU AI Act was being negotiated. It passed soon after that. But a lot has changed in those two or three years. This event is the AI Impact Summit. The event last year in Paris was the AI Action Summit. More recently, we have seen the not yet fully implemented EU AI Act become subject to an omnibus package where some of the expectations of that original act are being dialed back a little bit. And we’ve seen broad critique of regulatory structures, trust and safety structures, that might inhibit growth and innovation in AI. There clearly is a deregulatory mood in the air.

In fact, I think it’s notable that there has not been much discussion of law or regulatory initiatives that might create guardrails to help guide the adoption of AI. So clearly, we’re in an odd moment, and an odd moment for this panel. But as I walked around the campus of this event, this enormous campus, I noted something that was, I think, quite significant. Just about every second banner or poster, just about every large printout, every printed word on the show floor, somewhere had trust, safety or privacy as part of the messaging. In fact, the sutras, and we’ll talk about them as we go through the session, the principles announced by the Indian government, are largely around trust and safety.

And so what gives? What’s the dichotomy here? At one moment we are saying it’s a deregulatory moment, that we should step back. Yet at the same time, we are actively embracing and discussing trust and safety, risk management, protecting consumers, citizens, human beings as they engage with AI. So do we care or not? Are we actually in a deregulatory moment, or have we just gotten quiet about the need for guardrails and trust and safety in these systems? I would say for business, risk exists regardless of whether there’s a law in place or not, and so businesses have an imperative to respond. I’m going to tell a very, very quick story, and that is that in 1891, when electricity was first being brought into the White House in the United States, then President Benjamin Harrison and his wife, Caroline, were actually terrified of flipping the light switch.

And so they hired the electrician from the Edison Company, a man named Ike Hoover, who went on to become the chief usher of the White House. They hired him to flip the light switch. I think the message of this story is that we won’t use it if we don’t trust it. And so as AI is being pulled through the walls of our world, as it’s creating light and switches and tools for us to use, I think we need to ensure that we’re comfortable flipping those switches. And that is the topic of our panel today. So let’s jump in. And our first question is going to be about just the moment that we find ourselves in.

And I’m going to start with Alex. Why are trust and safety important to innovation? And maybe speak to this dichotomy that I’ve highlighted. Why is it that in this moment we can’t talk about regulation, but everywhere it seems we’re talking about trust and safety?

Alexandra Reeve Givens

Yeah, first of all, thank you for convening us, and it’s a pleasure to be here. I think you really hit the nail on the head in your introduction, which is when we think about the long-term success and sustainability of AI, and that is business sustainability for the companies as well as societal sustainability for all of us, the secret is not just acceleration, the biggest, fastest, most capable model. The real story is one of adoption, and that has been the overwhelming theme of the summit this year. And for people to adopt this technology, they need to trust it. And that’s trust in multiple different facets, right? Is the tool fit for purpose? Does it work in your language?

Is it appropriate for your culture? Will it protect your privacy? Is your data going to be secured? What is the quality of the information that is grounding that model and those outputs? And I think people are really waking up to this, and they’re demanding more. This is both as individual users and then, of course, for enterprise customers, too, who themselves are saying, we’re on the front lines thinking about how to integrate AI into our business operations. We’re the ones who will likely be sued if this goes wrong. So this is where trust really is the fuel of innovation because it is what’s going to be the economic driver of these tools being adopted. And the other thing that I would add is what we see is not only that trust is important for innovation in the abstract, but this is also where responsible, thoughtful regulation can be fuel for innovation as well.

Because the same way that we want to be able to drive cars without all of us being experts in how a motor works, product liability and good laws around the creation of these tools outsource some of that work for us, so that we don’t all have to do the individual labor of deciding whether we can trust. So many times people will create this false framing of regulation versus innovation, as opposed to thoughtful regulation being the fuel that actually allows us to sell, buy, and use these tools.

Trevor Hughes

Excellent. Fascinating. John, I’m going to jump to you, and Amanda, I will come right back. But I’m going to jump to you. The U.K. doesn’t have an AI law in place. It has lots of laws that will apply to AI. I think data protection and the UK GDPR is a great example of that. But talk to us a little bit about regulating in the absence of an AI law. What does that look like in the U.K.? And do you see organizations exhibiting behavior that demonstrates they’re focused on the ideas that Alex suggested, that trust and safety matter regardless of the regulatory structure that sits over them?

John Edwards

Yeah, absolutely. Absolutely. Absolutely.

Trevor Hughes

There it is.

John Edwards

No, very much so. I mean, the data protection laws apply across the board wherever technology touches personal data. So we have a de facto regulatory regime under the UK GDPR. Coming back to your comment about trust, it’s so important, and there is a role for regulation in assisting businesses, because businesses are trying to deliver that trust proposition to consumers. But by what metric? Right. And that’s, I think, where regulation can provide a common standard. So, you know, we require, as a regulatory tool, that you have to do data protection by design. You have to do data protection impact assessments. We expect privacy by design. We expect risk assessment. So all of these things are regulatory requirements, but they are also tools that help intermediate between businesses and consumers to demonstrate that there is a basis for trust.

And an organization like the ICO is there for both sides to see, well, there’s someone actually overseeing that. And that’s a role that we do discharge. To your point about the absence of prescriptive regulation in the UK on AI, we don’t see that particularly as a deficit. I mean, I think there’s a lot of policy work going on in areas where policymakers and regulators do need to step in. That’s ongoing, and I won’t comment on that. But, you know, there are ongoing issues about the distribution of proceeds from the use of creative materials and the like. That carries on. But in the absence of an explicit rule, it’s incumbent on my office to deliver safety and confidence and metrics for industry, and to deliver certainty over what can be seen as an uncertain law.

So we’ve gone out and said, well, here’s how we see the technology-neutral general principles of the GDPR apply when you train a model, for example. We see, for example, the EU AI Act in Article 10 talks about the need for fairness. Well, we’ve been able to articulate those obligations by way of guidance, linking it back already to the GDPR principles. So, you know, there’s a mapping. I don’t think at the moment, for the available applications of artificial intelligence technologies, that there is a lacuna. It’s there with the GDPR. And we are there to provide confidence and certainty about how you apply it, how you improve your products with it, and how by doing so you engender that trust that you described at the outset.

Trevor Hughes

Excellent. Okay, so Amanda, tell us, do you agree that there’s not a need for additional guardrails, traffic indicators in AI? Is John right that the existing regulatory structure is really providing enough guidance, or is it the case that Microsoft is using internal principles, frameworks, standards that you might adopt to build programs and services that you think meet the expectations of trust and safety of the marketplace?

Amanda Craig

Thank you. From a Microsoft perspective, we are focused on implementing our responsible AI governance program and see opportunity for lots of different governance models that governments could pursue in terms of implementing existing regulation, developing additional regulation that complements that existing regulation. I think the through line for us, the bottom line, is very much what Alex started us off with, that we do very much see, we’ve seen through multiple generations of technology, we’re not going to have adoption, we’re not going to have use of this technology without trust. And we need to have governance programs at technology companies. We need to have governance efforts by governments that are ensuring that we have an evolving conversation about trust.

Because if I pull the thread on the analogy you started us with, like how you flip on a light switch and that can be scary when you’ve not done it before, I think the other thing that is challenging, and true, about this technology is that it is also very dynamic. It is evolving very quickly. And people might even be scared that, like, they won’t know where to find the light switch next week. And that brings a whole different set of challenges. And so that requires not just confidence in how you are able to sort of trust the technology today, but also that there’s trust in a governance process that will continue to iterate and evolve alongside the technology.

Trevor Hughes

Excellent. Denise, help us then here. I know Singapore has released guidelines, standards around AI. Tell us about the Singaporean experience in thinking about regulating trust and safety in AI.

Denise Wong

Thanks so much, Trevor. And thank you to the IAPP for putting this together and for having us. Maybe I’ll answer that question by linking some of the concepts that we’ve talked about, which sort of underpin our philosophy. Trust and safety is the outcome that we want. You know, we want to create the necessary conditions for the society to thrive, for the public and the enterprises to use the technology with confidence. So, AI for the public good. To do that, we need governance. We need a framework for thinking about how we can govern the technology, and we’ve been doing this for all sorts of technology; AI is but one. Regulations are a mechanism, a type of governance mechanism that you use when the necessary and correct conditions exist.

And so that mapping of concepts informs how we think about our governance approach. So on issues that are very clear, where there are clear harms, we have stepped in to regulate. An example of this is the elections regulations we put in place, where we prohibited the use of AI deepfakes to represent candidates. It was time-limited, for the period of elections, but we stepped in and put a law in place for that. We also have laws for AI creating online harms, as well as AI in scam situations. So that is the part where we regulate for clear and present harms. For the rest of it, a lot of it we leave to sectoral regulations, where there’s already a web of existing regulations, and on specific issues as well.

John and I and many of us are in the data protection field where, as John has said, there are already existing laws that can be tacked on, updated, reviewed in order to deal with this new technology that has come about. So where we have done the AI governance frameworks and tools that you’ve mentioned is where we’ve seen a need to create some sort of horizontal principles and platforms to think about the sector-agnostic general issues: on transparency, on what model governance for corporates could look like. We haven’t seen the need to regulate that horizontal layer just yet, but certainly a need to articulate some of these principles. And that also allows us to create more certainty for the market, to give them some direction that actually this can be a market-driven assurance system that has demand, has supply, and has what we’ll call proto-standards, early days of standards about what good looks like.

So that’s the work that we’ve been doing, trying to create and simplify. We have seeded an assurance ecosystem that sits, I would say, adjacent and complementary to regulations where they’re needed.

Trevor Hughes

Fantastic. Please, please.

Alexandra Reeve Givens

So just to comment on that, one area that I think is proving very important, and people are discovering this across jurisdictions, is even where existing laws apply, there is a problem where AI systems make it hard to know whether or not those laws are being broken. So this is where that transparency layer you were articulating really becomes important.

Trevor Hughes

Give us an example.

Alexandra Reeve Givens

Yeah, and I’m going to make it U.S.-centric just because it’s the one that’s top of mind, so forgive the bias here. So in the U.S., we have equal employment laws. It is against the law to discriminate in the course of hiring. So in theory, a piece of software that perpetuates discrimination against particular candidates, for example, not considering the resumes of people over a certain age, is violating an existing law. So people will say we don’t need any further regulation. We’re done. The problem is, in a human-run system where it was just a bad apple in the HR department, it’s been historically easier to prove that case. Now, when it’s AI-powered software making that decision, it is really hard as a worker who’s just put in a resume and never got an answer back to know if something was going wrong.

If you actually get up your courage and file a case, it is really hard to prove your case if there is discrimination. And so without some type of disclosure regime that requires transparency in these high risk scenarios, high impact scenarios, to have transparency and disclosure about the system that is being used, impact assessments to make sure that discrimination isn’t happening, you actually don’t get the remedy that people really need under existing law. And so that’s where I think this horizontal piece can complement the sector specific vertical laws in a light touch way, but actually gives meaning to the laws on the books.

Trevor Hughes

So I think that’s a great example of the harm trigger that Denise described, that we identify a clear harm and that may be a place where additional regulatory structure might be helpful. I think we heard pretty significant consensus across our panel. Trust and safety is good. That’s good that we’re there. That’s a great consensus to achieve. And not complete consensus on the idea that additional regulation is needed yet. With the exception perhaps of a few scenarios in which we can identify high risk or harm. Let’s go to our audience for a second. Help us describe the relationship between innovation and regulation in AI. If you think it’s a great relationship, thumbs up. If you think it’s a bad relationship, thumbs down.

If you think it’s complicated, make it complicated. What do we think? Oh, I see a lot of “complicated.” What does our panel think? I think it’s a good relationship between innovation and appropriate regulation. Fascinating. We have a very strongly opinionated audience here. That’s great. Let’s talk about regulation again and dive in just a bit deeper. I think one of the things that’s tremendously challenging is prescriptive regulation, trying to understand harms that might occur before technology is fully adopted broadly in the marketplace. I’m a veteran of the privacy world going back to 1995, 1996. And in the late 1990s, we were talking extensively about cookies and how do we regulate cookies and the privacy issues associated with cookies.

Guess what? We’re still talking about cookies often. And I know many of the privacy and data protection people are nodding already. They’re crying a little bit because it’s so, so painful to implement many of the cookie banners and cookie consent mechanisms that we have. And I’m not entirely sure, we might get John to admit this even, that, you know, those cookie banners are actually driving the outcomes that we hope for. Did we identify the biggest and worst harm or concern and dedicate resources appropriately to that? Amanda, I’m going to jump right to you. Talk to us a little bit about identifying those harms. Alex gave us one, which is perhaps AI reviewing HR submissions, resumes, CVs, where language in those CVs may actually create results that were not intended, that create bias, that, you know, in a human-driven system would be easier to find and in an AI-driven system just much, much harder to find.

That’s a great example. How do we identify those prescriptive harms, those harms that we’re not quite sure about yet, that may emerge? Do we do it through principles, through ethics, through what?

Amanda Craig

I think all of the above to some extent. Part of why we start with principles in our governance program is I think it’s helpful to orient towards what do we care about, right, as we then try to build a program that realizes those outcomes. I think we also can look at existing law that reflects where there are harms, like in the employment context, where people could be mistreated or treated unfairly that we know we care about. And there’s been a lot of effort and regulation to define high risk, high impact. At Microsoft, we have something called the sensitive uses sort of scenarios where, you know, we have three categories where technology could have like an impact on someone’s life opportunity or consequential life impact of something like employment or education opportunities, for example, or how someone’s treated under the law otherwise, all sort of fit in that context.

We have the second big category of harm that we have defined as around sort of the risk for psychological or physical harm. So think about vulnerable populations there. Think about the use of AI in critical infrastructure. And then the third category is the use of AI that impacts human rights. So, you know, we have our way of defining what is really high impact. You know, a lot of governments, again, have taken different routes. I think the other thing that we’ve seen is the kind of emergence of a conversation around sort of technology itself that poses specific high risks. For example, highly capable models that have a whole other set of risks that are the risks that are being defined.

And that’s one thing that I just want to draw out as we think about this. And, you know, I didn’t grow up in the privacy world; I grew up in the cybersecurity world. And one of the things that I think a lot about as we work on defining these harms and figuring out what to do about them, that we can learn from the decades of work on cybersecurity, is the challenge of thinking about how to address risk across the supply chain. And I think it’s a slightly different conversation in AI than it has been traditionally in security with software and cloud technologies. But there is a common principle or approach that I think we should really look at closely, which is, you know, we are oftentimes in the context of AI thinking about risk and harm where the technology is actually used, right?

And then what’s difficult is figuring out what do we do across the whole supply chain to manage that risk and have that be cohesive. One of the things that is much simpler in the cybersecurity context is what the risk or harm is: it’s security risk that we care about. But we have the same challenge in terms of, like, how do we manage that risk across the supply chain? And one of the challenges over decades of work in the cybersecurity context has been the temptation to put emphasis on one part of the supply chain or the other at any given moment, instead of, like, really dealing with the really hard governance challenge that it is everything at once.

And so I think when we, you know, think about the complexity of defining harms in the AI space, that’s important work to do. And also, in the context of managing risk for any of those harms being realized, we also need to think really hard about looking across the whole supply chain at once. Even though it’s hard from a governance perspective, that’s going to be most important for managing the risk ultimately.

Trevor Hughes

Fantastic. And I misspoke. It’s prospective, not prescriptive, regulation. But John and Denise, maybe talk to us a little bit about that. And let me frame it for you both. And Denise, we’ll have you start. Clearly, with data protection regulation, we have had the GDPR now for over seven years. And the effect of that on the global policy environment has been enormous. We now have over 120 countries that have privacy laws in place. Many, many, many of them have genealogical lines that point back to the GDPR. And yet we haven’t seen that in AI yet. The EU AI Act has not taken off around the world.

We don’t see a Brussels effect happening on AI. Is it because of the challenge of identifying harm, the challenge of prospectively trying to identify what might be harmful?

Denise Wong

You always ask me the tough questions. Let me take the harms question first, because I think that’s relevant to the regulation question that you’re asking. I think the starting point must be that every country has a unique context. And it’s the job of the government to figure out what’s harmful to their society. I think there’s going to be a huge amount of overlap, but at the end of the day, what’s harmful in one context, what’s harmful in India, may not be the same as what’s harmful in the US. And the cultural context matters. That said, I think there’s actually increasing consensus, I feel, about what harms or archetypes of harms there are vis-à-vis AI.

And we see that, for example, the International AI Safety Report is starting to anchor some of this taxonomy, these buckets and archetypes of harm, and we also see that beginning to happen at Iceland, for example. Those conversations are happening. How does that link to prescriptive regulation or legislation? I think that if the harms are still being coalesced and formed, it’s quite difficult to be very prescriptive about how you deal with those harms, because that, by definition, is still changing and still coalescing. It’s still quite nascent. That’s not to say we should step back. I think we just probably need a slightly more agile way of thinking about that broader concept of governance.

So in the social media context in Singapore, we did it via codes of practice. So we have a broad sort of umbrella legislation that creates a legislative frame under which these codes of practice apply. But the codes of practice can be updated more easily. Same thing, actually, with our data protection law, the PDPA, which is structured quite differently from the GDPR. Our PDPA is actually not very prescriptive. It’s outcome-driven. It’s fairly broad. But most of the guidance that the PDPC provides, and these are for compliance, is done in advisory guidelines. So I think there are regulatory mechanisms you can use that are less prescriptive than primary legislation. And that gives you enough levers. It’s tools in a toolkit, basically, to be able to deal with the harms and with the problems that the society is facing.

Trevor Hughes

Excellent.

Alexandra Reeve Givens

To dispute you a little bit on the lack of a Brussels effect, I will say, going actually back to Denise’s point, not only is there some harmonization happening around the scoping of the harms, I think that certainly is happening, but also on potential points of intervention. So, for example, one of the key elements of the EU AI Act is looking at high-risk scenarios and having the right mitigations in place. We have similar laws under consideration in multiple states in the United States, one on the books already in Colorado. They would never say it is a copycat. It came from its own origins. But it is lawmakers thinking about what is an appropriate, right-scaled intervention to that particular risk.

You can look at the recent transparency laws that were passed in California and New York, very similar discussions to the Code of Practice for General Purpose AI models that came out under the EU AI Act. You can look at the EU AI Act’s provision for regulatory sandboxes and this notion that we want small and medium-sized enterprises and others to be able to innovate and get a little bit of forgiveness or wiggle room under the laws as they figure out how the regulations apply. A law like that just got passed in Utah. So there are these glimmers where we are seeing smart solutions to specific problems and people learning from each other.

Trevor Hughes

I think in the absence of that umbrella AI standard, that interaction with fellow regulators across disciplines and domains becomes really important. Or, John, I will ask you: does it become really important?

John Edwards

Yeah, it is. It’s hugely important that we coordinate. You know, these are new challenges that we’re all facing. On the Grok issue, obviously, it’s under investigation, so I won’t be able to say too much about it. But, you know, we’re interested in how models are trained, what data they’re trained on, what output filters are included, what kind of safety mechanisms. I’m interested in what kind of ingestion there is of data when it’s used at that level. But there’s some complexity in that case as well because, you know, you’ve got users using a tool that’s amplified by social media. I don’t know whether the same functionality is available in any other image generation tool that just hasn’t got the same media attention because it’s not amplified by a social media platform.

But, you know, very early on, I think I was back home in New Zealand, actually, on about the 5th of January, and started to see this. And I messaged back to the office and said, what are we doing? What’s Ofcom doing? How are we connecting to our international colleagues? And that’s so important. And so we’ve, you know, we’ve messaged into the GPA. We’ve coordinated very closely with Ofcom. And, you know, we have to cope with the fact that regulation is a little bit fragmented. So Ofcom is responsible for administering the Online Safety Act in the UK. Now, that is legislation that seeks to regulate the kinds of harmful content that can be delivered to a child’s device, for example.

Right. I see this thing. Is that regulated by online safety? If so, it’s Ofcom. How did that get to me? Well, that depends on how the underlying data was processed. That becomes an ICO, you know, GDPR issue. So we need to be working very, very closely, and we are. But also with the Grok issue, one of the very early things we did was to reach out to our colleagues in the GPA, the Global Privacy Assembly, and say, who else is looking at this? Let’s make sure that we’re not sort of treading on each other’s toes, or at least that we’re sharing information, that we’ve got the same ideas, that we think the same way. And that can be tremendously powerful, whether or not you can point to a regulation that that app or that platform is clearly in breach of.

To describe a set of expectations about harm mitigation across a coordinated group of global regulators, I think, can be quite powerful. And, you know, the alternative for some of these platforms is not necessarily being investigated and fined by the ICO. It’s like what I noticed the first day that I was here, when I went to flip TikTok on and saw: this is not available in this country. So if the offering in a particular jurisdiction does not meet the standards and norms of that jurisdiction, these organizations need to understand that they can be switched off, that they are not actually all-powerful.

Trevor Hughes

I just have the image of the U.K. Information Commissioner doom-scrolling TikTok in my head now. Let’s do a quick round, and please do keep your answers short. Innovation is not limited to technology, is not limited to business practices. It’s also very powerful in the privacy-enhancing, safety-enhancing tools that we use inside organizations. It’s in regulatory structures. Denise has mentioned regulatory sandboxes, or maybe it was Alex, but we’ve heard regulatory sandboxes mentioned. What is the one innovative idea in trust and safety that you think holds real promise? And I’ll let you do one sentence to explain it, but this is a speed round. So we’ll start with Amanda and then work down and come back to Alex.

Amanda Craig

One sentence. Okay. Is that my sentence? I think about provenance tools as an area of innovation. Again, this is calling upon my cybersecurity background, but I think, you know, something like agentic AI is an area where there’s a lot of interest, concern and governance momentum. And one of the challenges is being able to look at something that is fundamentally not just one technology. It’s a bunch of very dynamic components, models, platform tools, services, applications all working together. And while that feels like a really new, hard challenge, we actually can draw upon what we know of software, which is also a set of dynamic components. And one of the ways that we’ve figured out how to govern that, or are working towards figuring out how to govern it, is with software bills of materials, something that really allows you to track those dynamic components.

And I think that’s something we can apply to agents.

Trevor Hughes

So it increases transparency. It tells you, you know, which algorithm or which system this might have come from. It helps with accountability broadly. Yeah. Excellent. John, what’s the most promising trust and safety innovation that we have?

John Edwards

Well, you challenged us with one sentence, so I’m going to go with one word: agency. And for me, it’s a word that, you know, so much of our world is dominated by consent, which is, I won’t say broken, but it’s under strain as a useful concept. Agency, I think, has the capacity to recognize that the objective is to restore and maintain an individual’s agency as they use any product. And that’s more than consent. It’s actually making sure that provenance is delivered, for example. You can’t have agency if you don’t know the origin of the data that is delivering this agentic miracle to you. It gives you tools at the other end. And consent is always conceived of as a front-end authorizing concept.

But agency says, okay, I’ve done that now. Where’s my “delete everything” button? Or my “I don’t want to do this anymore” button. So I think if developers can be thinking about how they deliver the best possible service in a way that restores and maintains the agency of the consumer, that will go a long way to addressing some of the problems that we’re seeing.

Trevor Hughes

Fantastic. I had a law professor years ago who described burden-shifting wrenches in the law. And I think consent is a burden-shifting wrench that moved much of the burden to the data subject, to the individual. Agency, it sounds to me, is an idea to move that burden back to those who might be accountable and have them hold fiduciary or stewardship responsibilities for that person. Denise?

Denise Wong

I would pick privacy-enhancing technology. I think it’s an interesting technological way to deal with at least one part of the problem, which is how do we secure the data, how do we make sure that the personal information is well protected. And it’s advancing so quickly. Two years ago, we were looking at federated learning for training AI models, and no one could figure it out. I think it’s actually being done in production now. So there is… I’m a lawyer, so I can say this: sometimes the law cannot solve the problem. But maybe another technology can.

Trevor Hughes

Fantastic. Alex?

Alexandra Reeve Givens

Well-staffed, empowered, independent regulatory bodies that can help represent the public interest. Wow. And because in some countries those are under attack right now, where that is not available: well-resourced, technically informed, independent civil society that can play that role in the interim.

Trevor Hughes

Fantastic. Yeah, the importance of having watchdogs, entities that are watching and observing, commenting, enforcing, is really powerful. So there are four great innovations: provenance, agency, privacy-enhancing technologies, and well-funded regulators or civil society. Well done. I think that is a great start. Let’s do another audience poll. How many of you here in this audience are responsible for AI, or AI governance, AI ethics, AI safety, inside your organization? Hands up. It’s almost the whole room. Keep your hand up if you’re also responsible for something else in addition to AI, or if it’s just AI. It’s more. I think it is a pretty complete overlap, almost a complete overlap. At least a significant percentage were responsible for more than one thing, and one of those things was AI.

I think that’s an example of the complexity that we see inside organizations today. John described the coordination necessary between Ofcom and the ICO in the Grok investigation, which is ongoing, because there was not a single place where regulatory authority existed to address that concern. This is a really complex environment. The harms and issues span from children’s safety to intellectual property, from bias and algorithmic discrimination all the way through deepfakes and other things. Alex, how do we… how do we put that all into a pot and make it something meaningful?

Alexandra Reeve Givens

Well, what if you can’t put it all into a pot? The common denominator in all of those things is that AI is a tool that touches everything. So I really do think you need a nuanced approach that looks at a particular risk and what the mitigations are for that risk, and then goes from there. The privacy considerations when you are sharing your most intimate concerns and questions about the world with a chatbot are very different from these questions about deepfakes and fraud and impersonation. You just need to have a different legal regime. I think some common elements run through, though: one is that transparency and rigorous approaches to risk mitigation really matter, and that can come either through regulation or through principles and best practices with meaning and standardization, and watchdogs reading those disclosures.

And the second is this burden on the user. So when Trevor introduced me, he described my organization. We represent users’ rights around the world. I am all for user empowerment. And also, we cannot put the burden solely on users to navigate this moment. Indeed. And that is the major lesson of the cookie example you were describing before. We didn’t misdiagnose the harm. We misdiagnosed the remedy, which put the burden on individual users when we don’t actually have market choice, nor the time or mental energy to read a whole bunch of disclosures and act alone. And so we need solutions that acknowledge the harm, are tailored, but also take that burden off individual users. So you’re empowering users, but not burdening them or leaving them to essentially defend themselves unprotected.

We have to think about that.

Trevor Hughes

Okay. Sadly, we are at the end of our time, but we have one more pop question for all of you, and we’re going to let this be our close. We have gone through the AI Impact Summit, the AI Action Summit, the AI Safety Summit. Five years from now, what is the AI summit going to be called? What’s the word that’s going to be in the middle there? So this is a one-word answer again. What’s it going to be? I know it’s a tough question. So, Denise, I’ll start with you because you’re able to handle the toughest questions. Ah, the AI Trust Summit. Okay, John?

John Edwards

Nostalgia.

Trevor Hughes

Nostalgia.

Amanda Craig

Thriving.

Trevor Hughes

Thriving, AI Thriving Summit. Okay.

Alexandra Reeve Givens

I’m going to cheat: For the people, by the people. It’s more words.

Trevor Hughes

Some of the people. It’s hilarious. To get on a poster. Here’s what I know. I know that there is incredibly hard work that needs to be done to bring trust and safety to this ridiculously powerful technology that I think, as Sundar Pichai says, will be more profound than electricity. That hard work happens every single day inside organizations that are implementing these tools, inside civil society that is watching and guiding that behavior, inside regulatory offices that are navigating to ensure that marketplaces around the world, that the digital economy, get this right. I feel better because people like this are doing that work every day, and I hope you’ll join me in thanking them.

Thank you very much. Thank you so very much. Well done. You were fantastic, as expected.

Related Resources: Knowledge base sources related to the discussion topics (30)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“Alex Reed‑Gibbons, CEO of the Center for Democracy and Technology, was on the panel.”

The knowledge base lists Alex Reed-Gibbons as the CEO of the Center for Democracy and Technology, confirming her identity and role on the panel [S9].

Confirmed (high)

“The UK ICO has issued guidance that maps AI‑specific duties onto existing GDPR principles, providing certainty without new legislation.”

The UK Information Commissioner’s Office has published dedicated guidance on AI and data protection for professionals and technologists, confirming the existence of such guidance [S20].

Additional Context (medium)

“The United Kingdom lacks a dedicated AI statute and relies on the UK‑GDPR as the primary regime for AI governance.”

While the knowledge base confirms ICO guidance, it also notes that the UK is actively seeking alignment with the EU on AI policy, indicating that AI-specific regulatory measures are being considered but not yet codified as a separate statute [S97].

External Sources (98)
S1
How Trust and Safety Drive Innovation and Sustainable Growth — -Alexandra Reeve Givens- CEO of the Center for Democracy and Technology, one of the leading advocacy organizations worki…
S2
Open Forum: A Primer on AI — Artificial Intelligence (AI) has been widely adopted across various sectors, including facial recognition, online shoppi…
S3
How Trust and Safety Drive Innovation and Sustainable Growth — and then we’re going to dive right into my immediate left. I have Alex Reed -Gibbons, who is the CEO of the Center for D…
S4
How Trust and Safety Drive Innovation and Sustainable Growth — I just have the image of the U.K. Information Commissioner doom -scrolling TikTok in my head now. Let’s do a quick round…
S5
https://dig.watch/event/india-ai-impact-summit-2026/how-trust-and-safety-drive-innovation-and-sustainable-growth — and then we’re going to dive right into my immediate left. I have Alex Reed -Gibbons, who is the CEO of the Center for D…
S6
How Trust and Safety Drive Innovation and Sustainable Growth — – Alexandra Reeve Givens- Trevor Hughes- Amanda Craig
S8
How Trust and Safety Drive Innovation and Sustainable Growth — and then we’re going to dive right into my immediate left. I have Alex Reed -Gibbons, who is the CEO of the Center for D…
S9
https://app.faicon.ai/ai-impact-summit-2026/how-trust-and-safety-drive-innovation-and-sustainable-growth — and then we’re going to dive right into my immediate left. I have Alex Reed -Gibbons, who is the CEO of the Center for D…
S10
How Trust and Safety Drive Innovation and Sustainable Growth — -John Edwards- Information Commissioner of the United Kingdom
S11
How Trust and Safety Drive Innovation and Sustainable Growth — and then we’re going to dive right into my immediate left. I have Alex Reed -Gibbons, who is the CEO of the Center for D…
S12
How Trust and Safety Drive Innovation and Sustainable Growth — and then we’re going to dive right into my immediate left. I have Alex Reed -Gibbons, who is the CEO of the Center for D…
S13
https://dig.watch/event/india-ai-impact-summit-2026/how-trust-and-safety-drive-innovation-and-sustainable-growth — Fantastic. I had a lot of… Professor years ago now who described burden -shifting wrenches in the law. And I think con…
S14
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — -Regulatory Challenges for AI Agents: Panelists discussed how current regulations like the EU AI Act were not designed f…
S15
WS #362 Incorporating Human Rights in AI Risk Management — Criticism of lack of enforceability but potential value in encouraging company participation, challenges in articulating…
S16
Ethics and AI | Part 6 — A significant focus of the Act is placed on transparency. It mandates that users be informed when they are interacting w…
S17
European Council gives final approval to EU AI Act — Today, on 21 May, the European Councilgave its final approvalto the Artificial Intelligence (AI) Act, a pioneering legis…
S18
The EU AI Act must address human rights concerns, urged human rights organisations — TheEuropean Parliament plans to propose stricter rules for foundation models, such as ChatGPT, under the AI Act to regul…
S19
National Strategy for Artificial Intelligence — Citizens and businesses must have confidence in artificial intelligence whenever it is used by the public authorities, s…
S20
UK Information Commissioner’s Office publishes guidance on AI and data protection — The UK Information Commissioner’s Office has published aGuidance on AI and data protectiondedicated to professionals wit…
S21
From summer disillusionment to autumn clarity: Ten lessons for AI — Additionally, the EU’s long-negotiated AI Act imposes strict rules on AI systems (e.g. high-risk systems must meet safet…
S22
Safe Digital Futures for Children: Aligning Global Agendas | IGF 2023 WS #403 — The analysis examines topics such as online crime, the dark web, internet fragmentation, internet companies, innovation,…
S23
Do we really need specialised AI regulation? — The pyramid reveals a clear pattern:most layers of AI are already regulated. Hardware is controlled, data is protected (…
S24
WS #162 Overregulation: Balance Policy and Innovation in Technology — Galvez suggests that countries should consider their local needs and existing regulations when developing AI governance …
S25
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Involving citizens in the decision-making process fosters inclusivity and builds trust. Government institutions must be …
S26
Rethinking AI regulation: Are new laws really necessary? — Specialised AI regulation may not be necessary, as existing laws already cover many aspects of AI-related concerns. Jova…
S27
AI Meets Cybersecurity Trust Governance & Global Security — Maria stresses that identifying AI‑related harms requires joint effort from governments, civil society, and industry, no…
S28
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S29
From principles to practice: Governing advanced AI in action — Chris argues that some AI risks require entirely new risk management approaches because they can appear very quickly at …
S30
Harmonizing High-Tech: The role of AI standards as an implementation tool — Technical standards, in essence, operationalise policies, especially in crucial domains such as safety and privacy. Stan…
S31
Who Watches the Watchers Building Trust in AI Governance — Terrific. And if I could contrast what you said with what we might have said if we were having this conversation back at…
S32
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — Lucia Russo: Maybe I’ll go first. Yes, you’re totally right. We are seeing many policies and regulations emerging. A…
S33
Building Trust through Transparency — Finally, there is skepticism expressed towards additional regulations in combating corruption. A speaker argues that inc…
S34
47th US Presidency, Early Thoughts / DAVOS 2025 — The overall tone was analytical and speculative, with experts offering differing viewpoints. There was a mix of cautious…
S35
Building Trust through Transparency — In conclusion, the discussion on consumer rights, transparency, technological advancements, and the need for enforcement…
S36
How Trust and Safety Drive Innovation and Sustainable Growth — Alexandra Reeve Givens This insight identifies a critical gap in current regulatory approaches – that AI creates an ‘en…
S37
Closing the Governance Gaps: New Paradigms for a Safer DNS — Concerns were raised about the practice of using trusted notifiers to regulate content, emphasising the need for transpa…
S38
How Trust and Safety Drive Innovation and Sustainable Growth — and then we’re going to dive right into my immediate left. I have Alex Reed -Gibbons, who is the CEO of the Center for D…
S39
Science as a Growth Engine: Navigating the Funding and Translation Challenge — The discussion maintained a consistently thoughtful and collaborative tone throughout. While panelists acknowledged seri…
S40
Technology Rewiring Global Finance: A Panel Discussion Summary — Regarding regulation, the panelists emphasized the need for activity-based rather than institution-based oversight, with…
S41
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — It is important to understand that the regulatory sandbox is not a decision-making or exemption-providing mechanism. Ins…
S42
Agentic AI in Focus Opportunities Risks and Governance — Enterprise guardrails & risk management Industry favours globally‑recognised, voluntary standards rather than prescript…
S43
WSIS Action Line Facilitators Meeting: 20-Year Progress Report — – Carla Licciardello- Tee Wee Ang Legal and regulatory | Economic | Infrastructure Modern regulation requires innovati…
S44
Who Watches the Watchers Building Trust in AI Governance — The speakers demonstrated strong consensus on the urgency of AI governance challenges, the inadequacy of current approac…
S45
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Audience:Thank you. My name is Sonny. I’m from the National Physical Laboratory of the United Kingdom. There’s a few wor…
S46
Who Watches the Watchers Building Trust in AI Governance — Summary:The speakers demonstrated strong consensus on the urgency of AI governance challenges, the inadequacy of current…
S47
Exploring Emerging PE³Ts for Data Governance with Trust | IGF 2023 Open Forum #161 — Clara Clark Nevola:So, my name is Clara Clark Nevella and I’m joining you from the UK this morning. Well my morning I gu…
S48
Can Digital Economy Agreements Limit Internet Fragmentation? | IGF 2023 Day 0 Event #76 — Maiko Meguro:Thank you. Thank you, thank you, Bill. And thank you for having me here. It’s quite a pleasure to join this…
S49
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Alex Moltzau: I want to address this with an anecdote. Because I am Norwegian, I feel partly responsible here. I mean, I…
S50
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Our priority is to be a smart, confident adopter of AI. We are using it to lift productivity, support better public serv…
S51
Keynote-Roy Jakobs — Jakobs consistently emphasized that healthcare’s reliance on trust makes AI governance particularly critical. Healthcare…
S52
Do we really need specialised AI regulation? — Concerns about AI’s long-term risks should be approached cautiously. The precautionary principle, which advocates preven…
S53
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — This comment introduces a crucial distinction between prescriptive regulation and outcome-based regulation. It’s thought…
S54
Leaders TalkX: Partnership pivot: rethinking cooperation in the digital era — Regulatory approach – prescriptive vs. flexible frameworks
S55
What is it about AI that we need to regulate? — Interestingly, some speakers noted that clear regulatory guidance can actually accelerate innovation. Eltjo Poort inWS #…
S56
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Artificial intelligence (AI) has the potential to drive innovation across sectors, but it also poses challenges in terms…
S57
Global AI Policy Framework: International Cooperation and Historical Perspectives — – Alexandra Baumann- Lucia Velasco Building on existing institutions rather than creating entirely new frameworks Kurb…
S58
How Trust and Safety Drive Innovation and Sustainable Growth — And I come from the IAPP. If you don’t know the IAPP, we are a global professional association, a not -for -profit but a…
S59
How Trust and Safety Drive Innovation and Sustainable Growth — So it increases transparency. It tells you, you know, which algorithm or which system this might have come from. It help…
S60
Do we really need specialised AI regulation? — The pyramid reveals a clear pattern:most layers of AI are already regulated. Hardware is controlled, data is protected (…
S61
Artificial Intelligence & Emerging Tech — The need for new mechanisms to safeguard data, in addition to consent, is becoming increasingly important There is a gr…
S62
WS #162 Overregulation: Balance Policy and Innovation in Technology — Galvez suggests that countries should consider their local needs and existing regulations when developing AI governance …
S63
The perils of forcing encryption to say “AI, AI captain” | IGF 2023 Town Hall #28 — The existing protection systems were acknowledged as generally effective but in need of improvement. Participants called…
S64
Rethinking AI regulation: Are new laws really necessary? — Specialised AI regulation may not be necessary, as existing laws already cover many aspects of AI-related concerns. Jova…
S65
AI Meets Cybersecurity Trust Governance & Global Security — Maria stresses that identifying AI‑related harms requires joint effort from governments, civil society, and industry, no…
S66
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S67
From principles to practice: Governing advanced AI in action — Chris argues that some AI risks require entirely new risk management approaches because they can appear very quickly at …
S68
Policymaker’s Guide to International AI Safety Coordination — Russell argues that global coordination on AI safety is essential because the potential harms, whether psychological dam…
S69
Advancing Scientific AI with Safety Ethics and Responsibility — “Thank you so much for your wonderful insights i really enjoyed this session as a researcher in safety of ai at the univ…
S70
Who Watches the Watchers Building Trust in AI Governance — Terrific. And if I could contrast what you said with what we might have said if we were having this conversation back at…
S71
WEF Business Engagement Session: Safety in Innovation – Building Digital Trust and Resilience — Legal and regulatory | Development | Cybersecurity Recognition that trust and safety practices must adapt to new techno…
S72
Secure Finance Risk-Based AI Policy for the Banking Sector — Trust is built when systems are predictable, explainable, and accountable. Trust deepens when innovation aligns with pub…
S73
Pathways to De-escalation — The overall tone was serious and somewhat cautious, reflecting the gravity of cybersecurity challenges. While the speake…
S74
https://dig.watch/event/india-ai-impact-summit-2026/welfare-for-all-ensuring-equitable-ai-in-the-worlds-democracies — It’s fascinating maybe we’ll come back to it as we talk to a close. Let me shift gears a little bit and talk a bit more …
S75
WS #144 Bridging the Digital Divide Language Inclusion As a Pillar — The discussion maintained a consistently collaborative and solution-oriented tone throughout. While panelists acknowledg…
S76
WS #226 Strengthening Multistakeholder Participation — The discussion maintained a collaborative and constructive tone throughout, with participants openly acknowledging chall…
S77
Leaders TalkX: ICT application to unlock the full potential of digital – Part II — The discussion maintained a consistently professional, collaborative, and optimistic tone throughout. Speakers were solu…
S78
WS #278 Digital Solidarity &amp; Rights-Based Capacity Building — The overall tone was collaborative and solution-oriented, with panelists offering constructive ideas and acknowledging c…
S79
Panel 3 – Legal and Regulatory Tools to Reduce Risks and Strengthen Resilience  — The discussion maintained a collaborative and constructive tone throughout, with panelists openly sharing both successes…
S80
Panel 4 – Resilient Subsea Infrastructure for Underserved Regions  — The discussion maintained a professional, collaborative tone throughout, with panelists building on each other’s insight…
S81
Part 7: ‘Converging realities: Embedding governance through digital twins’ — These are not hypothetical questions; they point to a growing gap between how governance is implemented through technolo…
S82
About the Authors — A competitive market is, of course, the most dynamic governor of marketplace conduct, and there is a broad consensus tha…
S83
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — The topic of AI regulation is currently being discussed in the context of its application. The argument put forth is tha…
S84
Bridging the AI innovation gap — The tone is consistently inspirational and collaborative throughout. The speaker maintains an optimistic, forward-lookin…
S85
Scaling Innovation Building a Robust AI Startup Ecosystem — Overall Tone:The tone was consistently celebratory, appreciative, and inspirational throughout. It began formally with t…
S86
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — The tone is consistently optimistic, collaborative, and forward-looking throughout the discussion. Speakers emphasize “l…
S87
AI for Good Technology That Empowers People — The tone was consistently optimistic and collaborative throughout, with speakers demonstrating genuine enthusiasm for so…
S88
MahaAI Building Safe Secure & Smart Governance — Mr. Virendra Singh opened by emphasizing that the key question is not whether AI will shape governance, but whether gove…
S89
Global Perspectives on Openness and Trust in AI — This panel discussion at an AI summit explored the concept of “openness” in artificial intelligence development and gove…
S90
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — “And then the third area that we talked about was this notion of a trust deficit.”[49]. “as a result of the absence of t…
S91
https://dig.watch/event/india-ai-impact-summit-2026/secure-finance-risk-based-ai-policy-for-the-banking-sector — Thank you actually i will go murli sir and the rbi free ai report or the framework on uh you know any enablement of ethi…
S92
Keynotes — This comment represented a turning point in the discussion, shifting from accepting the regulation-innovation tension as…
S93
Why science matters in global AI governance — Thank you very much. There is a computer here. I don’t know to whom it belongs. Excellencies, ladies and gentlemen. Than…
S94
Global Enterprises Show How to Scale Responsible AI — Gurnani argues that trustworthy AI is fundamentally about end-user confidence, requiring three key enablers: security as…
S95
Tokenisation and the Future of Global Finance: A World Economic Forum 2026 Panel Discussion — Legal and regulatory | Economic Regulation and innovation must work together, not in opposition Regulatory Framework a…
S96
https://dig.watch/event/india-ai-impact-summit-2026/why-science-metters-in-global-ai-governance — Summit, the release of AI, so on, models and so on. Other aspect which is very important, as you rightly said, the risk …
S97
UK seeks alignment with EU on AI policy framework and copyright issues — As part of a warmup of relations on science and technology, Jonathan Berry, the UK’s AI minister, used positive language…
S98
Proposed GDPR changes target AI development — The European Commission hasproposedchanges to the GDPR and the EU AI Act as part of its Digital Omnibus Package, seeking…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
T
Trevor Hughes
2 arguments · 143 words per minute · 2428 words · 1015 seconds
Argument 1
The “deregulatory” mood coexists with pervasive trust‑and‑safety messaging
EXPLANATION
Trevor observes that while many stakeholders claim a deregulatory climate, the event’s signage and messaging repeatedly emphasize trust, safety and privacy. He questions whether this reflects a genuine retreat from regulation or simply a quieter discussion of guardrails.
EVIDENCE
He notes that “there clearly is a deregulatory mood in the air” yet “just about every second banner or poster… had trust, safety or privacy as part of the messaging” and asks whether we are truly in a deregulatory moment or merely silent about needed guardrails [27-40].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The tension between deregulation and safety guardrails is highlighted in discussions of regulatory challenges for AI agents, noting both safety concerns and worries about over-regulation [S14]. The summit’s trust-and-safety messaging is also documented [S1].
MAJOR DISCUSSION POINT
Adequacy of existing regulatory frameworks
AGREED WITH
Denise Wong, Alexandra Reeve Givens, Amanda Craig, John Edwards
Argument 2
The tension between deregulation sentiment and the need for guardrails reflects a complex regulatory moment
EXPLANATION
Trevor frames the current environment as contradictory: a push for deregulation alongside strong calls for trust and safety. He suggests that this paradox creates an “odd moment” for policymakers and industry alike.
EVIDENCE
He describes the dichotomy of “at one moment we are saying it’s a deregulatory mode… at the same time we are actively embracing… trust and safety” and asks whether we truly care or are simply quiet about guardrails [27-40].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panel discussions describe an “odd moment” where deregulation sentiment meets strong calls for safety, echoing the paradox noted by Trevor [S14][S1].
MAJOR DISCUSSION POINT
Prospective (agile) versus prescriptive regulation
A
Alexandra Reeve Givens
5 arguments · 190 words per minute · 1283 words · 403 seconds
Argument 1
Trust drives adoption and economic sustainability
EXPLANATION
Alexandra argues that long‑term business and societal sustainability of AI depends on users’ trust. Without trust, adoption stalls, limiting the economic benefits of AI technologies.
EVIDENCE
She explains that “the long-term success and sustainability of AI… is business sustainability for the companies, as well as societal sustainability” and that “for people to adopt this technology, they need to trust it,” which she calls “the fuel of innovation” [53-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust is emphasized as essential for technology adoption, with statements that people won’t use technology they don’t trust [S1][S3].
MAJOR DISCUSSION POINT
Trust and safety as the engine for AI innovation
AGREED WITH
Trevor Hughes, Amanda Craig, John Edwards, Denise Wong
Argument 2
Thoughtful regulation can enable trust and market uptake
EXPLANATION
She contends that well‑designed regulation can act as a catalyst for innovation by reducing the burden on individual users and firms to assess trust themselves. Product‑liability‑type laws can outsource trust‑building to regulators.
EVIDENCE
Alexandra notes that “responsible, thoughtful regulation can be fuel for innovation” and likens it to “product liability and good laws… help make sure they outsource some of that work for us” so users don’t have to decide trust on their own [68-70].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Regulation is framed as an enabler of trust and adoption, illustrated by the “car analogy” and remarks that thoughtful regulation fuels innovation [S1][S3].
MAJOR DISCUSSION POINT
Trust and safety as the engine for AI innovation
Argument 3
Existing laws are hard to enforce without transparency, e.g., discrimination in hiring
EXPLANATION
Alexandra illustrates that current anti‑discrimination statutes are difficult to apply when AI systems make opaque decisions. She calls for a transparency and disclosure regime to make existing laws effective.
EVIDENCE
She describes U.S. equal-employment laws and explains that “when it’s AI-powered software… it is really hard… to prove your case” without “some type of disclosure regime that requires transparency” and impact assessments [154-162].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The “enforcement invisibility” problem is discussed, highlighting difficulties applying anti-discrimination statutes to opaque AI and calling for disclosure regimes [S15][S3].
MAJOR DISCUSSION POINT
Identifying and mitigating AI harms through principles, high‑risk categories, and supply‑chain thinking
Argument 4
Emerging prescriptive elements in the EU AI Act and US state laws show a move toward targeted rules
EXPLANATION
She points out that despite a broader deregulatory vibe, concrete legislative efforts are emerging, such as high‑risk provisions in the EU AI Act and transparency laws in U.S. states. These illustrate a shift toward focused, prescriptive regulation.
EVIDENCE
Alexandra cites “the EU AI Act… looking at high-risk scenarios” and notes similar laws in Colorado, California, New York, and Utah, including “transparency laws” and “regulatory sandboxes” that echo EU provisions [262-270].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The EU AI Act’s high-risk provisions and transparency obligations are detailed, and US state transparency laws are cited as examples of focused prescriptive regulation [S16][S17][S18][S3].
MAJOR DISCUSSION POINT
Prospective (agile) versus prescriptive regulation
Argument 5
Well‑staffed, independent regulatory bodies and empowered civil‑society watchdogs as a public‑interest safeguard
EXPLANATION
Alexandra emphasizes the importance of robust, independent regulators and civil‑society actors to protect the public interest in AI. She warns that attacks on such bodies undermine trust and safety.
EVIDENCE
She states “well-staffed, empowered, independent regulatory bodies… represent the public interest” and notes that in some countries these institutions are under attack, highlighting the need for “well-resourced, technically informed, independent civil society” [355-356].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for well-staffed, independent regulators and empowered civil-society actors is reiterated in summit commentary [S1][S3].
MAJOR DISCUSSION POINT
Innovative governance tools and concepts with promise for trust and safety
John Edwards
5 arguments · 143 words per minute · 1144 words · 477 seconds
Argument 1
Businesses must demonstrate trust regardless of explicit AI law
EXPLANATION
John explains that even without a dedicated AI statute, businesses are still obliged to earn consumer trust through existing data‑protection requirements. Regulators can provide common standards that help firms prove trustworthy practices.
EVIDENCE
He notes that “data protection laws apply… we have a de facto regulatory regime under the UK GDPR” and that “regulation can provide a common standard… privacy by design… impact assessment… are regulatory requirements… that help intermediate between businesses and the consumers” [84-95]. He also adds that the ICO offers oversight to reinforce that trust [96-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust is highlighted as essential for adoption and regulation is seen as providing common standards that help firms prove trustworthy practices [S3][S1].
MAJOR DISCUSSION POINT
Trust and safety as the engine for AI innovation
AGREED WITH
Trevor Hughes, Alexandra Reeve Givens, Amanda Craig, Denise Wong
Argument 2
UK relies on the GDPR as a de‑facto AI regulatory regime
EXPLANATION
John states that the UK’s existing data‑protection framework effectively governs AI activities, filling the gap left by the absence of a specific AI law.
EVIDENCE
He says “the data protection laws apply across the board… we have a de facto regulatory regime under the UK GDPR” [85-87].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The UK’s reliance on the GDPR as a de-facto AI regulatory framework is explicitly noted [S3].
MAJOR DISCUSSION POINT
Adequacy of existing regulatory frameworks
AGREED WITH
Denise Wong, Alexandra Reeve Givens, Amanda Craig, Trevor Hughes
Argument 3
UK maps AI obligations to GDPR principles via published guidance
EXPLANATION
He describes how the ICO translates AI‑related risks into GDPR‑based obligations through guidance, linking AI fairness and risk to established data‑protection principles.
EVIDENCE
John explains that “we’ve gone out and said… the technology-neutral general principles of the GDPR apply when you train a model” and that “we’ve been able to articulate those obligations by way of guidance, linking it back already to the GDPR principles” [103-107].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
ICO guidance links AI-related risks to GDPR principles, providing concrete mapping of obligations [S20][S3].
MAJOR DISCUSSION POINT
Adequacy of existing regulatory frameworks
Argument 4
Global regulator coordination (ICO, Ofcom, GPA) is essential to fill gaps in fragmented regulation
EXPLANATION
John highlights the need for close cooperation among national regulators and international bodies to address AI risks that cross jurisdictional boundaries, especially when no single authority has full competence.
EVIDENCE
He recounts contacting Ofcom, the Global Privacy Assembly, and coordinating investigations such as the GROK issue, emphasizing “we need to be working very, very closely” and “global regulator coordination… can be tremendously powerful” [274-300].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Coordination among ICO, Ofcom and international bodies is described as crucial for addressing cross-jurisdictional AI risks [S9][S3].
MAJOR DISCUSSION POINT
Prospective (agile) versus prescriptive regulation
Argument 5
Agency – restoring user control beyond consent, including “delete‑everything” mechanisms
EXPLANATION
John proposes that ‘agency’ should replace consent as the core principle, giving users ongoing control over their data and the ability to withdraw consent or delete their information at any time.
EVIDENCE
He describes agency as “the capacity to recognize… maintain an individual’s agency… you’ve done that now. Where’s my delete-everything button?” and links it to provenance and consent concepts [328-339].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concept of ‘agency’ as a replacement for consent, emphasizing ongoing user control and delete-everything mechanisms, is discussed [S3].
MAJOR DISCUSSION POINT
Innovative governance tools and concepts with promise for trust and safety
Amanda Craig
4 arguments · 173 words per minute · 1045 words · 361 seconds
Argument 1
Microsoft views trust as prerequisite for any AI deployment
EXPLANATION
Amanda asserts that without trust, AI adoption will not occur, and therefore Microsoft embeds trust‑building governance throughout its product lifecycle.
EVIDENCE
She says “we’re not going to have adoption… without trust” and that “we need governance programs at technology companies… and governance efforts by governments… to ensure an evolving conversation about trust” [112-120].
MAJOR DISCUSSION POINT
Trust and safety as the engine for AI innovation
AGREED WITH
Trevor Hughes, Alexandra Reeve Givens, John Edwards, Denise Wong
Argument 2
Microsoft categorises “sensitive uses” (employment, education, human‑rights, etc.) to focus risk mitigation
EXPLANATION
Amanda explains that Microsoft has defined three categories of “sensitive uses” to prioritize high‑impact AI applications, such as employment decisions, education, and human‑rights contexts.
EVIDENCE
She outlines “the sensitive uses… three categories… employment or education opportunities… risk for psychological or physical harm… impacts human rights” [195-205].
MAJOR DISCUSSION POINT
Identifying and mitigating AI harms through principles, high‑risk categories, and supply‑chain thinking
Argument 3
AI risk must be managed across the entire supply chain, not just at the point of use
EXPLANATION
Drawing on her cybersecurity background, Amanda stresses that AI risk management should consider the whole supply chain, ensuring cohesive governance from data collection to model deployment.
EVIDENCE
She notes “we can learn from… cybersecurity… risk across the whole supply chain… we are oftentimes in the context of AI thinking about risk… at the point of use… difficult is figuring out what do we do across the whole supply chain” [209-218].
MAJOR DISCUSSION POINT
Identifying and mitigating AI harms through principles, high‑risk categories, and supply‑chain thinking
Argument 4
Provenance tools / software bills of materials to track dynamic AI components
EXPLANATION
Amanda highlights provenance‑tracking technologies, likening them to software‑built materials that can record the lineage of AI models and components, thereby enhancing transparency and accountability.
EVIDENCE
She says “provenance tools… software bills of materials… allow you to have the ability to track those dynamic components” and that this can be applied to agents [316-322].
MAJOR DISCUSSION POINT
Innovative governance tools and concepts with promise for trust and safety
Denise Wong
5 arguments · 164 words per minute · 969 words · 353 seconds
Argument 1
Singapore’s policy aims to create conditions for public‑good AI through trust
EXPLANATION
Denise describes Singapore’s overarching goal of fostering AI that serves the public good, with trust and safety as the desired outcomes, achieved through governance and targeted regulation where needed.
EVIDENCE
She states “trust and safety is the outcome that we want… we want to create the necessary conditions for society to thrive… AI for that public good… we need governance… regulations are a mechanism… we step in where harms are clear… we also use sectoral regulation and horizontal principles” [124-146].
MAJOR DISCUSSION POINT
Trust and safety as the engine for AI innovation
AGREED WITH
Trevor Hughes, Alexandra Reeve Givens, Amanda Craig, John Edwards
Argument 2
Singapore applies sector‑specific regulation for clear harms and uses codes of practice for broader AI issues
EXPLANATION
Denise explains that Singapore directly legislates in areas with obvious risks (e.g., election deepfakes) while employing flexible codes of practice and advisory guidelines for less‑clear AI challenges.
EVIDENCE
She gives the example of “elections regulations… prohibited the use of AI deepfakes” and notes that “for the rest… we leave to sectoral regulations… we have horizontal principles… codes of practice can be updated more easily” [136-148].
MAJOR DISCUSSION POINT
Adequacy of existing regulatory frameworks
Argument 3
Singapore regulates clear‑harm scenarios (election deepfakes) and adopts horizontal principles for other risks
EXPLANATION
Building on the previous point, Denise reiterates that Singapore’s approach combines direct regulation for high‑impact harms with broader, principle‑based frameworks to guide the market.
EVIDENCE
She repeats that “on issues that are very clear… we have stepped in to regulate… example elections… we also have laws for AI creating online harms… for the rest we have sectoral regulations and horizontal principles” [136-146].
MAJOR DISCUSSION POINT
Identifying and mitigating AI harms through principles, high‑risk categories, and supply‑chain thinking
Argument 4
Singapore’s outcome‑driven PDPA and flexible codes of practice illustrate agile governance
EXPLANATION
Denise highlights that Singapore’s Personal Data Protection Act (PDPA) is purpose‑oriented rather than prescriptive, allowing regulators to issue advisory guidelines and codes of practice that can be updated quickly.
EVIDENCE
She notes “our PDPA is very not prescriptive… outcome driven… most of the guidance… is done in advisory guidelines… codes of practice can be updated more easily” [250-259].
MAJOR DISCUSSION POINT
Prospective (agile) versus prescriptive regulation
Argument 5
Privacy‑enhancing technologies such as federated learning to protect data while enabling AI
EXPLANATION
Denise points to federated learning as a concrete privacy‑enhancing technology that allows AI models to be trained without centralising raw data, illustrating how technology can complement regulation.
EVIDENCE
She says “two years ago we were looking at federated learning… now it’s being done in production” and adds that “sometimes the law cannot solve the problem, but technology can” [349-351].
MAJOR DISCUSSION POINT
Innovative governance tools and concepts with promise for trust and safety
Agreements
Agreement Points
Trust is essential for AI adoption and innovation
Speakers: Trevor Hughes, Alexandra Reeve Givens, Amanda Craig, John Edwards, Denise Wong
Trust drives adoption and economic sustainability; Microsoft views trust as prerequisite for any AI deployment; businesses must demonstrate trust regardless of explicit AI law; Singapore’s policy aims to create conditions for public‑good AI through trust
All panelists agreed that trust is the fuel for AI adoption; without trust, users and businesses will not adopt AI, making it a prerequisite for innovation and economic sustainability [45-46][56-68][112-115][84-95][128-130].
POLICY CONTEXT (KNOWLEDGE BASE)
The centrality of trust for AI uptake is highlighted in multiple discussions, noting that trust and safety drive innovation and sustainable growth in AI ecosystems [S35][S36][S50][S51].
Existing data‑protection and sectoral regulations can serve as a de‑facto AI regulatory regime
Speakers: John Edwards, Denise Wong, Alexandra Reeve Givens, Amanda Craig, Trevor Hughes
UK relies on the GDPR as a de‑facto AI regulatory regime; sector‑specific regulation for clear harms and horizontal principles for broader AI issues; emerging prescriptive elements in the EU AI Act and US state laws show targeted rules; opportunity for lots of existing regulation to be applied to AI; EU AI Act being dialed back but still part of regulatory structures
Panelists concurred that current data-protection frameworks (e.g., GDPR) and sector-specific laws can be leveraged to govern AI, filling gaps left by the absence of a dedicated AI statute [84-87][142-148][262-270][112-115][26-28].
POLICY CONTEXT (KNOWLEDGE BASE)
Existing data-protection frameworks are seen as capable of covering AI risks, with analyses showing compatibility of AI with data-protection law and calls to build on current institutions rather than create new regimes [S56][S57].
Preference for agile, prospective regulation (codes of practice, sandboxes) over heavy prescriptive law
Speakers: Trevor Hughes, Denise Wong, Alexandra Reeve Givens, Amanda Craig, John Edwards
The “deregulatory” mood coexists with pervasive trust‑and‑safety messaging; outcome‑driven PDPA and flexible codes of practice illustrate agile governance; emerging prescriptive elements but need for agile mechanisms like sandboxes; Microsoft sees need for evolving governance models that complement existing regulation; guidance mapping AI obligations to GDPR principles provides a flexible approach
All agreed that regulation should be agile and outcome-driven, using codes of practice, sandboxes and guidance rather than rigid, prescriptive statutes [27-40][250-259][262-270][112-120][103-107].
POLICY CONTEXT (KNOWLEDGE BASE)
Regulatory sandboxes are promoted as dialogue-based, agile tools for exploring uncertain AI areas, and industry voices favour voluntary standards over rigid statutes [S41][S43][S42][S54].
Transparency and provenance tools are needed to enable trust and enforce existing laws
Speakers: Alexandra Reeve Givens, Amanda Craig, John Edwards, Denise Wong
Existing laws are hard to enforce without a transparency layer; provenance tools / software bills of materials can track dynamic AI components; regulation can provide common standards and oversight to demonstrate trust; horizontal principles and proto‑standards create market‑driven assurance
Panelists agreed that transparency mechanisms, such as disclosure regimes, provenance tracking, and industry standards, are essential for building trust and making existing legal frameworks effective [150-162][316-322][88-95][144-147].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for transparency, provenance and traceability mechanisms are recurrent, emphasizing their role in building trust and overcoming enforcement invisibility in AI governance [S35][S36][S37].
Coordination across regulators and global cooperation is essential to fill gaps in AI governance
Speakers: John Edwards, Denise Wong, Trevor Hughes
Global regulator coordination is essential to address fragmented regulation; sectoral regulation and horizontal principles require coordinated oversight; presence of multiple regulators on the panel highlights need for cooperation
All agreed that close cooperation among national regulators, international bodies and sectoral actors is crucial to manage AI risks that cross jurisdictional boundaries [274-300][124-126][9-10].
POLICY CONTEXT (KNOWLEDGE BASE)
Panelists stress the need for activity-based, cross-jurisdictional oversight and global regulatory coordination to address fragmented AI governance landscapes [S40][S44][S57].
Similar Viewpoints
Both argue that well‑designed regulation is not a barrier but an enabler of AI innovation by reducing the burden on firms and users to assess trust themselves [68-70][84-95].
Speakers: Alexandra Reeve Givens, John Edwards
Thoughtful regulation can be fuel for innovation; regulation provides common standards that help businesses demonstrate trust
All emphasize a holistic, supply‑chain‑wide approach to AI risk management, requiring coordination among regulators, industry and sectoral bodies [209-218][274-300][142-148].
Speakers: Amanda Craig, John Edwards, Denise Wong
AI risk must be managed across the whole supply chain; global regulator coordination is needed to address fragmented oversight; horizontal principles and sector‑wide frameworks support supply‑chain governance
They concur that flexible, outcome‑oriented tools (codes of practice, sandboxes, guidance) are more suitable for the fast‑evolving AI landscape than rigid legislation [250-259][262-270][103-107].
Speakers: Denise Wong, Alexandra Reeve Givens, Amanda Craig, John Edwards
Agile, outcome‑driven regulatory mechanisms (codes of practice, sandboxes) are preferable; sectoral regulation for clear harms, flexible codes for broader issues; guidance mapping AI to existing data‑protection principles
Unexpected Consensus
Despite a proclaimed deregulatory mood, all speakers agree that some form of regulation or governance is still necessary
Speakers: Trevor Hughes, Alexandra Reeve Givens, Amanda Craig, John Edwards, Denise Wong
Deregulatory mood coexists with trust‑and‑safety messaging; regulation can be fuel for innovation; Microsoft sees need for governance programs; businesses must demonstrate trust via existing regulation; Singapore uses regulation where clear harms exist
While the opening remarks highlight a deregulatory atmosphere, every panelist stresses the continued need for regulatory or governance mechanisms, whether through existing data-protection law, sectoral codes, or new agile tools, making this alignment surprising [27-40][68-70][112-115][84-95][124-130].
POLICY CONTEXT (KNOWLEDGE BASE)
Even amid talk of deregulation, there is consensus that governance mechanisms remain indispensable, reflecting observations from multi-stakeholder AI governance sessions [S44][S46][S55].
Consensus that privacy‑enhancing technologies (e.g., federated learning) can complement or substitute regulation
Speakers: Denise Wong, Alexandra Reeve Givens
Privacy‑enhancing technologies such as federated learning can address data protection challenges; transparency and disclosure regimes are needed to make existing laws effective
Denise highlights federated learning as a technical solution, while Alexandra stresses transparency to enforce laws; both converge on the idea that technology can fill gaps where law alone may fall short [349-351][150-162].
POLICY CONTEXT (KNOWLEDGE BASE)
Privacy-enhancing technologies are highlighted as viable technical complements to regulatory approaches, with PETs discussed as trust-building tools in AI governance forums [S47].
Overall Assessment

The panel shows strong convergence on four core themes: (1) trust is the cornerstone of AI adoption; (2) existing data‑protection and sectoral regulations can be leveraged as a de‑facto AI regime; (3) agile, outcome‑driven regulatory tools (codes of practice, sandboxes, guidance) are preferred over rigid statutes; (4) transparency, provenance and coordinated global oversight are essential for effective governance.

High consensus – the speakers, representing regulators, industry and civil society, largely agree on the necessity of trust‑centric, flexible governance, indicating a shared direction for future AI policy development.

Differences
Different Viewpoints
Whether existing regulatory frameworks are sufficient to ensure trust and safety in AI
Speakers: Trevor Hughes, John Edwards, Amanda Craig, Denise Wong
Trevor observes a deregulatory mood yet pervasive trust-and-safety messaging, questioning if guardrails are being ignored [27-40]
John asserts that the UK GDPR provides a de-facto AI regulatory regime and that there is no lacuna needing new AI-specific law [84-88][107-108]
Amanda states that Microsoft relies on internal responsible-AI governance and sees opportunity for governments to develop additional regulation that complements existing rules [112-119]
Denise explains Singapore’s approach of regulating clear-harm scenarios while using sector-specific regulations and agile codes of practice for broader AI issues, implying current frameworks are not enough [136-148][250-259]
Trevor questions whether the apparent deregulatory climate truly leaves a gap in AI guardrails; John argues the existing GDPR-based regime already fills that gap; Amanda suggests that while internal governance is essential, complementary regulation may still be required; Denise favors targeted regulation for obvious harms and flexible, principle-based tools for other risks, indicating that existing laws alone are insufficient. [27-40][84-88][107-108][112-119][136-148][250-259]
POLICY CONTEXT (KNOWLEDGE BASE)
Debate persists on the adequacy of current laws, with some experts questioning whether existing frameworks can fully guarantee AI safety and trust [S44][S52].
Prescriptive versus agile (prospective) regulatory approaches for AI
Speakers: Trevor Hughes, Alexandra Reeve Givens, Denise Wong, Amanda Craig, John Edwards
Trevor frames the moment as needing prospective, agile regulation rather than heavy-handed prescriptive rules [221-235]
Alexandra points to emerging prescriptive elements such as high-risk provisions in the EU AI Act and US state transparency laws, showing a move toward targeted rules [262-270]
Denise highlights Singapore’s outcome-driven PDPA and the use of advisory codes of practice that can be updated quickly, favoring less prescriptive mechanisms [250-259]
Amanda mentions regulatory sandboxes as flexible tools that give space for innovation while providing some regulatory guidance [269-271]
John relies on guidance that maps AI obligations onto existing GDPR principles, a form of indirect prescriptive guidance [103-107]
Trevor calls for agile, prospective regulation; Alexandra notes concrete prescriptive measures emerging in the EU and US; Denise advocates for outcome-driven legislation supplemented by adaptable codes of practice; Amanda sees sandboxes as a middle ground; John emphasizes guidance that translates existing GDPR rules to AI contexts. The panel therefore diverges on how much prescriptiveness is appropriate. [221-235][262-270][250-259][269-271][103-107]
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between outcome-based, flexible regulation and traditional prescriptive rules is a recurring theme, illustrated by discussions on outcome-based models and flexible frameworks [S53][S54][S42].
Unexpected Differences
Perception of a ‘deregulatory mood’ versus the panel’s view of active regulatory coordination
Speakers: Trevor Hughes, John Edwards, Denise Wong
Trevor highlights a deregulatory sentiment while noting ubiquitous trust-and-safety messaging [27-40]
John claims the absence of a specific AI law is not a deficit because the UK GDPR already provides a comprehensive regime [107-108]
Denise stresses the need for coordinated codes of practice and sector-specific regulations to address clear harms, suggesting ongoing regulatory activity [236-242][250-259]
Trevor’s framing of a deregulatory moment seems at odds with John’s confidence that existing GDPR suffices and Denise’s call for coordinated regulatory tools, revealing an unexpected split on how much new regulation is actually needed. [27-40][107-108][236-242][250-259]
POLICY CONTEXT (KNOWLEDGE BASE)
Contrasting views on a deregulatory climate versus active coordination are noted, with panels emphasizing ongoing collaborative regulatory activity despite deregulatory rhetoric [S44][S40].
Overall Assessment

The panel shows strong consensus that trust and safety are essential for AI adoption, but diverges on whether current regulations (e.g., GDPR) are adequate or whether new, possibly prescriptive, rules are required. Disagreements focus on the sufficiency of existing frameworks, the balance between prescriptive and agile regulation, and the interpretation of a deregulatory climate.

Moderate disagreement: while all agree on the goal of trustworthy AI, they differ on the regulatory path forward, implying that future policy will need to reconcile de‑facto regimes with targeted, flexible governance mechanisms to avoid gaps and ensure global coordination.

Partial Agreements
All speakers agree that trust and safety are prerequisites for AI adoption and innovation, but they differ on the mechanisms (regulation, internal governance, or a mix of both) to achieve that trust. [44-46][56-68][112-119][88-95][128-132]
Speakers: Trevor Hughes, Alexandra Reeve Givens, Amanda Craig, John Edwards, Denise Wong
Trevor uses the 1891 White House story to illustrate that technology will not be used without trust [44-46]
Alexandra says trust is the fuel of innovation and essential for adoption and economic sustainability [56-68]
Amanda states that AI adoption will not happen without trust and that governance programs are needed to build it [112-119]
John notes that regulation can provide common standards that help businesses demonstrate trust to consumers [88-95]
Denise defines trust and safety as the desired outcome for AI serving the public good [128-132]
Takeaways
Key takeaways
Trust and safety are essential drivers of AI adoption and economic sustainability; without trust, users will not ‘flip the switch.’
Thoughtful, well‑designed regulation can enable trust and act as a catalyst for innovation rather than a barrier.
In jurisdictions lacking a dedicated AI law (e.g., the UK), existing data‑protection regimes (GDPR/UK‑GDPR) are being mapped to AI activities to provide de‑facto oversight.
Singapore’s approach blends sector‑specific regulation for clear‑harm cases (e.g., election deepfakes) with flexible codes of practice and outcome‑driven guidance for broader AI issues.
Transparency is critical; without it, existing anti‑discrimination or privacy laws are hard to enforce in AI‑driven contexts.
Risk identification relies on principle‑based frameworks, high‑risk categorisation (employment, education, human rights, critical infrastructure, etc.), and a supply‑chain view of AI development.
Agile governance mechanisms (codes of practice, regulatory sandboxes, outcome‑driven legislation) are preferred over rigid, prescriptive rules at this early stage.
Global regulator coordination (ICO, Ofcom, GPA, etc.) is seen as essential to fill gaps and avoid fragmented oversight.
Innovative governance tools showing promise include provenance / software‑bill‑of‑materials tracking, the concept of “agency” (user control beyond consent), privacy‑enhancing technologies such as federated learning, and well‑staffed independent watchdogs and civil‑society bodies.
Resolutions and action items
Regulators (ICO, Ofcom, GPA) will continue coordinated information‑sharing on emerging AI risks (e.g., the GROK incident).
The UK Information Commissioner’s Office will publish guidance mapping AI obligations to GDPR principles to give businesses certainty.
Singapore’s PDPC will maintain and update sector‑specific codes of practice and leverage its outcome‑driven PDPA to address new AI harms.
Microsoft will keep advancing its Responsible AI Governance programme, focusing on provenance tools and “sensitive‑use” risk categorisation.
IAPP will continue to convene multi‑stakeholder dialogues to align industry, regulator, and civil‑society perspectives on trust and safety.
Unresolved issues
How to prospectively define and harmonise AI‑specific harms across diverse cultural and legal contexts without a universal standard.
Mechanisms for enforcing transparency in AI decision‑making (e.g., disclosure regimes for high‑risk HR tools) remain under‑developed.
The balance between deregulation sentiment and the need for concrete guardrails: which areas require new legislation versus self‑governance?
How to effectively manage AI risk across the entire supply chain, especially when components are dynamically updated.
The extent to which existing privacy and anti‑discrimination laws can be operationalised for AI without additional technical standards.
Long‑term coordination frameworks among global regulators to avoid fragmented oversight.
Suggested compromises
Leverage existing data‑protection and sector‑specific laws as a de‑facto AI regulatory layer while developing targeted guidance for AI‑specific issues.
Adopt flexible, outcome‑driven codes of practice and regulatory sandboxes that can be updated quickly, rather than relying solely on rigid statutes.
Combine top‑down regulation for clearly identified harms (e.g., election deepfakes) with bottom‑up, market‑driven assurance mechanisms (horizontal principles, provenance standards) for broader AI risks.
Encourage industry self‑governance (e.g., Microsoft’s responsible AI program) alongside regulator‑provided standards to share the burden of trust‑building.
Thought Provoking Comments
The event’s banners everywhere talk about trust, safety and privacy, yet there’s a strong deregulatory mood and little discussion of new AI‑specific laws. Are we really deregulating, or just being quiet about the need for guardrails?
Frames the central paradox of the panel – the tension between market‑driven deregulation and the pervasive demand for trust and safety – and forces all participants to confront whether the current regulatory silence is intentional or a gap.
Sets the agenda for the whole discussion, prompting each panelist to justify their stance on regulation versus trust. It leads directly to Alex’s argument that regulation can fuel innovation and to John’s explanation of how existing data‑protection law fills the AI gap.
Speaker: Trevor Hughes (moderator)
Regulation isn’t the opposite of innovation; thoughtful, well‑designed regulation actually *fuels* innovation by providing common standards that let businesses outsource the trust‑building work to regulators.
Challenges the common narrative that regulation stifles progress and reframes it as a catalyst, introducing the idea that legal standards can be a market advantage rather than a burden.
Shifts the conversation from a binary view of regulation vs. innovation to a more nuanced perspective. It influences John to emphasize the GDPR as a de‑facto AI framework and prompts Denise to discuss where regulation is truly needed versus where sectoral rules suffice.
Speaker: Alexandra Reeve Givens (civil‑society)
In the UK we don’t have a specific AI law, but the GDPR provides a de‑facto regulatory regime. We map AI‑specific obligations (e.g., fairness from the EU AI Act) onto existing GDPR principles and issue guidance to give certainty.
Introduces a concrete example of how existing data‑protection law can be leveraged for AI governance, showing that a lack of AI‑specific legislation isn’t necessarily a regulatory vacuum.
Provides a practical model that other panelists reference. Denise cites this approach when describing Singapore’s mix of regulation and sectoral codes, and it reinforces Alex’s point about the need for transparency to make existing laws effective.
Speaker: John Edwards (UK Information Commissioner)
We regulate only where harms are clear (e.g., AI‑generated deepfakes in elections). For the rest we rely on sectoral regulations and develop horizontal principles and proto‑standards, creating a market‑driven assurance ecosystem adjacent to law.
Highlights a hybrid governance model that balances prescriptive regulation for obvious risks with flexible, industry‑led standards for emerging issues, introducing the concept of “agile” regulation.
Expands the discussion beyond the binary of regulation vs. no regulation, prompting Amanda to talk about internal governance frameworks and Alex to raise the transparency problem in AI‑driven hiring.
Speaker: Denise Wong (Singapore PDPC)
AI‑driven hiring tools can violate existing US anti‑discrimination laws, but it’s extremely hard for a job applicant to prove bias because the algorithm’s decision‑making is opaque. We need a disclosure regime and impact assessments to give existing laws teeth.
Provides a concrete, jurisdiction‑specific illustration of how AI obscures legal compliance, underscoring the need for transparency layers that make existing statutes enforceable.
Serves as a turning point that deepens the technical‑legal discussion. It leads John to acknowledge the need for coordinated regulator action, and Denise to stress the importance of codes of practice that can be updated quickly.
Speaker: Alexandra Reeve Givens
From a cybersecurity perspective, the biggest challenge is managing risk across the entire AI supply chain, not just at the point of use. We need to think about governance that spans all components, similar to how we handle software supply‑chain risk.
Introduces the supply‑chain risk lens, a fresh angle that connects AI governance to decades of cybersecurity practice, suggesting a holistic, cross‑layer approach rather than siloed regulation.
Broadens the conversation to include technical implementation challenges, prompting John to discuss coordination between different regulators (ICO, Ofcom) and reinforcing Denise’s point about agile, code‑based governance.
Speaker: Amanda Craig (Microsoft)
‘Agency’ should replace consent as the core principle: it’s not just about front‑end permission but about giving users ongoing control, provenance, and the ability to withdraw or delete data after the fact.
Proposes a paradigm shift in user‑rights framing, moving from a static consent model to a dynamic agency model, which could reshape future regulatory design.
Triggers a reflective exchange where Trevor links agency to burden‑shifting, and Denise acknowledges that technology (e.g., privacy‑enhancing tech) may sometimes solve problems that law cannot, enriching the debate on user empowerment.
Speaker: John Edwards
Overall Assessment

The discussion pivoted around the initial paradox of a deregulatory climate coexisting with a demand for trust and safety. Trevor’s opening question exposed this tension, prompting Alex’s counter‑narrative that regulation can be an engine for growth. John’s concrete example of leveraging GDPR and Denise’s hybrid governance model provided practical pathways, while Alexandra’s hiring‑bias case illustrated why existing laws need transparency mechanisms. Amanda’s supply‑chain perspective and John’s agency concept introduced fresh frameworks that moved the conversation from abstract policy to actionable technical and rights‑based solutions. Collectively, these comments shifted the panel from a superficial debate about regulation vs. innovation to a nuanced exploration of how existing laws, agile standards, transparency, and new rights concepts can together build trustworthy AI ecosystems.

Follow-up Questions
How should regulators investigate and address the emerging GROK issue, especially given its amplification across social media platforms and overlapping regulatory jurisdictions?
Understanding GROK’s impact is crucial for coordinating regulatory responses across data protection, online safety, and content moderation frameworks.
Speaker: John Edwards
What comprehensive taxonomy of AI harms can be developed to standardize identification and mitigation across jurisdictions?
A unified harm taxonomy (e.g., from the International AI Safety Report) would enable consistent regulation, risk assessment, and cross‑border cooperation.
Speaker: Denise Wong
How can AI risk be managed holistically across the entire supply chain rather than at isolated points of use?
Supply‑chain‑wide risk management is essential to prevent gaps where harms could arise in components, data sources, or downstream integrations.
Speaker: Amanda Craig
Do cookie‑consent mechanisms provide a viable model for AI trust and safety tools, and how effective are they in practice?
Evaluating the real‑world impact of consent banners can inform the design of comparable AI transparency and user‑control mechanisms.
Speaker: Trevor Hughes
What is the impact of regulatory sandboxes (e.g., Utah’s AI sandbox) on fostering innovation while ensuring safety, and how can their effectiveness be measured?
Assessing sandbox outcomes helps determine whether they strike the right balance between flexibility for developers and protection for users.
Speaker: Alexandra Reeve Givens
What coordination mechanisms are needed between overlapping regulators (e.g., ICO, Ofcom, Global Privacy Assembly) to address cross‑jurisdictional AI issues?
Effective inter‑agency collaboration prevents regulatory gaps and duplicated effort when AI systems fall under multiple legal regimes.
Speaker: John Edwards
What transparency and disclosure regimes are required for AI systems to enable enforcement of existing laws such as anti‑discrimination statutes?
Without clear disclosures, it is difficult to prove violations of current legislation, undermining legal remedies for affected individuals.
Speaker: Alexandra Reeve Givens
Why has the EU AI Act not generated a Brussels effect in AI regulation worldwide, and what factors limit its global diffusion?
Understanding barriers to international adoption can guide future policy design to achieve broader harmonization.
Speaker: Trevor Hughes
How can agile governance tools (e.g., outcome‑driven legislation, codes of practice) be structured to keep pace with rapidly evolving AI technologies?
Less prescriptive, adaptable mechanisms may provide timely safeguards without stifling innovation.
Speaker: Denise Wong
What role can privacy‑enhancing technologies such as federated learning play in mitigating AI‑related privacy risks, and how can their deployment be encouraged?
Technical solutions may fill gaps where law lags, offering practical ways to protect personal data in AI training and inference.
Speaker: Denise Wong
How can the concept of ‘agency’ be operationalized in AI products to restore user control beyond traditional consent models?
Shifting focus to agency could empower users with ongoing rights (e.g., data deletion, opt‑out) and improve trust.
Speaker: John Edwards
What are the design and implementation challenges of provenance tools (e.g., software bills of materials) for tracking dynamic AI components, and how can they enhance accountability?
Provenance mechanisms can increase transparency about model origins and updates, supporting both regulators and users in assessing trustworthiness.
Speaker: Amanda Craig

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

IndoGerman AI Collaboration Driving Economic Development and Soc


Session at a glance: Summary, keypoints, and speakers overview

Summary

The summit opened with Anandi Iyer highlighting that the Indo-German MOU on innovation and AI is already being implemented and that today’s speakers would explore concrete value-propositions for both nations [1-2]. Georg Enzweiler then framed AI’s economic promise (potentially adding $5-15 trillion to global GDP by 2030), but warned that growth must be inclusive, climate-friendly and supported by Germany’s AI “lighthouses” and the newly launched India-Germany AI Pact covering industry, talent and social good [36-45][54-55].


Thomas Kuhn of Fraunhofer described the institute’s focus on trustworthy and industrial AI, emphasizing data-space technology for secure, rule-based data sharing and the creation of “virtual colleagues” that preserve organisational knowledge with uncertainty-aware responses [72-84][98-106][112-118]. Rajkumar Upadhyay outlined India’s rapid AI expansion (ranking third globally in talent, attracting $2 billion in investment and projecting $1.7 trillion in economic impact), while proposing joint work on smart manufacturing standards, agricultural productivity, cybersecurity, quantum communication and large-scale fraud-detection platforms [133-144][149-161][164-184][190-199][202-209].


Anandi stressed that collaboration is co-creation rather than technology transfer, inviting German delegates to witness CDOT’s capabilities firsthand [210-214]. Kristina Sinemus presented Germany’s AI strategy built on the principle that technology must serve people, noting a €60 billion fund supporting 170 startups in health, agriculture and AI-quality testing, and the establishment of an AI innovation lab and a trust-testing hub [236-244][250-267][270-276].


The CEOs’ panel illustrated complementary strengths: Bosch’s deployment of AI in products and its transformation of engineering practices [300-305][306-311]; SAP’s responsibility to embed AI responsibly across core processes with explainability and compliance [317-324]; Aumavio’s AI-driven gains in R&D efficiency, quality assurance and automotive innovations [334-354]; and Mercedes-Benz’s early adoption of AI in vehicles since 2019 and plans for operational efficiency [368-376]. The discussion concluded that Germany’s precision engineering and India’s scale, under shared democratic values, create a unique opportunity for a sustainable, inclusive Indo-German AI partnership [136-138][272-274][276-279].


Keypoints


Major discussion points


Strategic Indo-German AI partnership – The summit is framed as a launchpad for deeper collaboration, highlighting existing MOUs, the new India-Germany AI Pact, and the shared ambition to make AI a driver of inclusive economic growth and social welfare [1][28-33][54-56].


Trustworthy AI and data-space infrastructure – Fraunhofer stresses the need for reliable, explainable AI and secure, rule-based data sharing (data spaces) to enable industry-scale AI while protecting sensitive corporate data [72-84][101-108][115-120]. German policy support (AI “lighthouses”, funding programmes, AI quality-testing hub) is presented as a model for building trustworthy AI ecosystems [46-48][241-268].


Sector-specific collaboration opportunities – Both speakers and panelists identify concrete domains where joint work can add value:


* Smart manufacturing & industry 4.0 – standards, cross-border data flows, energy efficiency [149-154];


* Agriculture – AI-driven yield improvement, early disease detection via satellite data [156-163][257-263];


* Healthcare – AI-assisted diagnostics, robotics for mobility-impaired patients [245-252][226-230];


* Cybersecurity, telecom & quantum – real-time attack detection on massive data streams, quantum-ready encryption, digital-intelligence platforms for fraud prevention [158-170][172-176][190-199].


Industry leaders’ practical concerns – CEOs from Bosch, SAP, Mercedes-Benz and others discuss the “night-time” worries of integrating AI into legacy processes, ensuring compliance, explainability, and managing the paradigm shift in engineering and software development [298-305][311-324][334-350]. They stress the need for responsible rollout that balances innovation with customer trust.


Inclusive, sustainable AI aligned with democratic values – The summit repeatedly returns to the theme that AI must serve people, not the other way round, and that growth should be equitable, climate-conscious, and governed by transparent regulations [39-42][272-278][236-240].


Overall purpose / goal


The gathering is intended to cement and expand the Indo-German AI collaboration by (i) showcasing existing research and industry capabilities, (ii) outlining concrete joint initiatives across key sectors, (iii) establishing frameworks for trustworthy, secure AI deployment, and (iv) mobilising public-private partnerships and funding to translate AI research into societal and economic benefits for both nations [1][28-33][54-56][241-268].


Overall tone


The conversation begins with a formal, celebratory tone, emphasizing partnership achievements and future optimism. It then shifts to a technical, solution-focused tone as experts detail AI trustworthiness, data-space technology, and sectoral use-cases. Mid-session the tone becomes pragmatic and candid, with CEOs voicing operational anxieties and responsibility concerns. Throughout, the tone remains constructively collaborative and forward-looking, underscored by repeated references to inclusive, sustainable outcomes.


Speakers

Prashant Doreswamy – CEO of Aumavio (formerly Continental); expertise in automotive technology, AI-driven quality improvement, R&D efficiency, and AI applications in manufacturing and mobility. [S1][S2]


Dr. Thomas Kuhn – Head of the Division of Embedded Systems, Fraunhofer IESE; expertise in trustworthy AI, industrial AI, data spaces, and AI integration for industry, healthcare, logistics, and energy. [S4]


Anshuman Awasthi – CTO of Mercedes-Benz Research and Development Center (Bangalore, India); expertise in automotive AI, operational efficiency, and AI-enabled vehicle technologies. [S6][S7]


Dr. Rajkumar Upadhyay – CEO, Centre for Development of Telematics (CDOT); expertise in telecommunications, AI for cybersecurity, quantum communication, fraud detection, 5G/6G, smart manufacturing, and AI-enabled agriculture. [S8][S9]


Anandi Iyer – Head of Fraunhofer in India (18 years); moderator and champion of Indo-German AI collaboration; expertise in research-to-market innovation ecosystems. [S11]


Dattatri Salagame – Senior executive, Robert Bosch Software Solutions; expertise in AI-driven products (autonomous driving, AI cockpit, AI in healthcare) and AI-enabled engineering processes. [S12][S13]


Georg Enzweiler – Guest speaker delivering the special address; senior German representative on AI policy and Indo-German cooperation. [S15]


Prof. Dr. Kristina Sinemus – Minister for Digitalisation and Innovation, Hesse, Germany; expertise in AI strategy, trustworthy AI, public-private AI funding, and AI applications in health, agriculture, and industry. [S16]


Sindhu Gangadharan – Senior executive (CEO) at SAP; expertise in enterprise application software, AI integration across core business processes, responsible AI, and explainable AI. [S18]


Additional speakers:


Dr. Padhya – Mentioned as present at the event (no speaking segment recorded).


Mr. Morley – Referenced at the close of the session (no speaking segment recorded).


Tim – Briefly addressed by the moderator (no substantive speech recorded).


Atatürk – Name invoked by the moderator when calling participants (no speech recorded).


Full session report: Comprehensive analysis and detailed insights

The summit opened with Anandi Iyer celebrating the rapid implementation of the Indo-German memorandum of understanding on innovation and AI, noting that many of the activities outlined in the MOU had already been “kick-started” [1-2]. She introduced the day’s speakers, highlighting Dr Padhya’s role in shaping a joint value-proposition, and welcomed Dr Thomas Kuhn from the Fraunhofer Institute for Experimental Software Engineering (IESE), who would discuss AI competencies in workplace transformation, manufacturing, agriculture and health [3-5]. Iyer then announced a “power-packed industry captain session” featuring four Bangalore-based CEOs – Sindhu Gangadharan (SAP), Anshuman Awasthi (Mercedes-Benz), Dattatri Salagame (Robert Bosch Software Solutions) and Prashant Doreswamy (Aumavio) – and introduced Murali Nair of the Bertelsmann Stiftung, which has long promoted Indo-German knowledge exchange [9-18]. She concluded by underscoring Fraunhofer’s 18-year presence in India, its extensive research network, and the more than €70 million it has earned from Indian research contracts over the past decade, positioning the institute as a “first mover” in the Indian innovation corridor [20-22]. After a brief procedural note, she invited Mr Georg Enzweiler to deliver the special address [26-27].


Georg Enzweiler opened with a light-hearted remark that the audience were “big fans of Karnova” and that he wished Germany had “several Anandi Iyers” to multiply her efforts [36-38]. He framed the summit’s theme around inclusive, climate-friendly AI growth, reminding the audience that AI could add between $5 trillion and $15 trillion to global GDP by 2030, but that the benefits must be “inclusive” and must avoid harming people or the planet [36-42]. He cited Germany’s “AI lighthouses” – more than 60 projects funded since 2020 that address wildfire prevention, renewable energy, biodiversity and the circular economy – as a model for sustainability-oriented research [44-47]. Enzweiler then highlighted the newly launched India-Germany AI Pact, describing it as an “implementation-driven collaboration across government, industry, research, skill development and innovation” [54-56]. He also noted India’s ambition to build massive computing capacity powered by renewable energy, reinforcing the summit’s green-technology focus [48-50]. Finally, he pointed out that India enjoys the highest AI-skill penetration rate globally [50-52].


Dr Thomas Kuhn (Fraunhofer IESE) explained the institute’s focus on “augmented intelligence”, i.e. AI that keeps human expertise at its core [62-63]. He identified three technical pillars: (i) trustworthy AI, achieved through an “uncertainty wrapper” that attaches a confidence score to every model output; (ii) preservation of organisational knowledge via “virtual colleagues” that learn from retiring experts and retain tacit expertise; and (iii) secure, rule-based data-space infrastructure that enables high-throughput AI training while protecting sensitive corporate data – Fraunhofer’s data-space platform can handle up to 10,000 transactions per second [106-118]. Kuhn also described Fraunhofer’s alliance of more than 30 institutes covering life sciences, logistics, production, energy, smart grids and security, all contributing to best-practice AI strategies [86-92]. He concluded that trustworthy, industrial AI can drive a “welfare for all” transformation, echoing the summit’s motto [95-96].


Dr Rajkumar Upadhyay (CDOT) presented India’s rapidly expanding AI ecosystem. He noted that India ranks third globally in AI talent and R&D, hosts a $2 billion AI investment programme, and has distributed 38 000 GPUs to startups, projecting a $1.7 trillion contribution to the economy by 2035 [136-141]. He contrasted this with Germany’s AI market, expected to reach €30 billion by 2030, and argued that the complementary strengths of German precision engineering and Indian scale create a “win-and-win” partnership [143-148]. Upadhyay outlined concrete collaboration areas: (i) smart manufacturing standards, cross-border industrial data flows and energy-efficiency measures [153-155]; (ii) AI-enhanced agriculture to boost yields and farmer incomes [156-163]; (iii) cybersecurity capable of processing 10 TB s⁻¹ and detecting attacks in real time [158-162]; (iv) joint quantum-communication research, noting his role as chair of the National Quantum Communication Hub together with IIT Madras [172-179]; and (v) large-scale fraud-prevention platforms such as SanchalSathi.gov.in, a digital-intelligence system that links telecom, banks and police to flag spoofed calls within 5 ms and block risky transactions [190-194]. He also highlighted India’s recent rollout of 4G/5G in September 2025, the existence of roughly 170 000 base stations generating 5 petabytes of data, and the country’s early planning for 6G in collaboration with global partners [172-176]. Upadhyay closed by urging that AI should “strengthen inclusion, productivity and resilience” and that the existing Fraunhofer-India MOU should be deepened [202-209].


Anandi Iyer then reinforced the co-creation model, inviting participants to visit the German Pavilion where AI use-cases in agriculture, medicine and manufacturing were on display, and stressing that the partnership is not a one-sided technology transfer but a joint effort to solve India-specific challenges [210-214]. She introduced Professor Kristina Sinemus, noting the significance of a woman leading innovation and digitalisation in Germany [215-218].


Prof Dr Kristina Sinemus (German Ministry of Innovation) articulated Germany’s AI strategy, built on the principle that “technology must serve people, not the other way around” [236-238]. She announced an “AI made in Hessen” agenda and a funding programme of over $60 billion that has already supported 170 startups, citing examples such as a $1.8 million grant for a load-bearing robotic wheelchair that enables mobility-impaired users to navigate stairs [245-250] and the RISCA project, an AI-driven cardiology decision-support system for early disease detection [253-255]. In agriculture, Sinemus described satellite-based AI models that detect plant stress early, reducing pesticide use and improving sustainability [258-263]. She also announced an AI innovation lab at Hessian AI, co-funded by her ministry, offering high-performance computing and advisory services to SMEs, and a public-private AI quality-testing hub that translates “trustworthy AI” from slogan to measurable criteria [264-269]. Sinemus linked these initiatives to the AI Pact’s emphasis on democratic values, inclusive growth and “AI for social good” [272-274].


The CEOs’ panel reflected industry perspectives. Sindhu Gangadharan (SAP) stressed that, with 87 % of global businesses using SAP, the company must embed AI responsibly across core processes, providing explainability, transparency, fairness and auditability, especially as autonomous workflows become more common [317-324]. Dattatri Salagame (Bosch) explained that Bosch is deploying AI in autonomous driving, AI-cockpits and healthcare devices, while also using AI to “disrupt long-held beliefs of how we build software”, a shift that creates “anxiety” about new engineering paradigms [300-305]. Prashant Doreswamy (Aumavio) highlighted AI-driven gains of over 20 % in R&D efficiency, quality improvement through AI-enhanced inspection, and automotive innovations such as AI-powered camera vision, an e-travel companion and driver-assist features, all showcased at the German stand [334-354]. Anshuman Awasthi (Mercedes-Benz) noted that Mercedes was the first automotive firm to embed AI in vehicles in 2019 and is now using AI to enhance operational efficiency, with the original 2019 AI system developed by MBRDI in Bangalore [364-371][385-386]. The discussion exposed divergent views: Bosch’s description of AI adoption as a paradigm shift that generates “anxiety” [300-305] contrasted with Mercedes-Benz’s assertion that “AI is not a challenge” for the company [364-371].


Across the summit, senior officials – Anandi Iyer, Georg Enzweiler and Rajkumar Upadhyay – agreed that the Indo-German AI effort should be a co-creation partnership rather than a simple technology hand-over [212-214][54-56][205-206]. Consensus on trustworthy AI was evident in Fraunhofer’s uncertainty wrappers and data-space governance [106-118], Germany’s AI quality-testing hub [264-269], Bosch’s safety-critical requirements [300-304] and SAP’s demand for explainability and auditability [317-324]. All speakers highlighted AI’s sectoral impact: Georg cited transformative roles in production, health and food systems [228-230]; Rajkumar enumerated joint work in smart manufacturing, agriculture and cybersecurity [149-161]; Thomas listed Fraunhofer’s applications across life sciences, logistics and energy [87-90]; and Anandi pointed to live demonstrations at the German Pavilion [129-131]. Agreement also existed on the need for skill development – the AI Pact’s focus on training [48-49], Germany’s over $60 billion programme investing in talent [241-244], and India’s share of the global AI talent pool [50-52]. Finally, all parties stressed sustainability – Germany’s AI “lighthouses” and 60 climate-focused projects [44-47], Sinemus’s AI-driven reduction of pesticide use [258-263], and Rajkumar’s smart-manufacturing energy-efficiency agenda [153-155].


Key take-aways and proposed actions


(i) Deepen the co-creation model outlined in the AI Pact, treating the partnership as an implementation-driven collaboration rather than a one-sided transfer [54-56].


(ii) Leverage the existing Fraunhofer-India MOU to launch sector-specific joint pilots in manufacturing, agriculture, health and cybersecurity [205-206].


(iii) Arrange visits of German delegations to CDOT’s facilities to build mutual trust [210-214].


(iv) Jointly draft smart-manufacturing standards, cross-border data-flow safeguards and energy-efficiency frameworks [153-155].


(v) Collaborate on quantum-communication research and post-quantum encryption [172-179].


(vi) Expand AI-enabled fraud-detection platforms such as SanchalSathi.gov.in and the Financial Risk Indicator [190-194].


(vii) Scale AI innovation labs and high-performance computing resources for SMEs, as demonstrated by the Hessian AI lab [264-267].


(viii) Establish a joint roadmap for trustworthy AI testing and certification, building on Germany’s AI quality-testing hub and Fraunhofer’s uncertainty metrics [264-269][106-118].


In conclusion, the summit succeeded in cementing a shared vision of an Indo-German AI partnership built on complementary strengths – Germany’s precision engineering, regulatory expertise and funding capacity, and India’s vast talent pool, market scale and rapid AI adoption. While high-level consensus was strong on inclusive, trustworthy and sustainable AI, moderate disagreements persisted around the perceived difficulty of AI integration and the exact framing of the partnership’s operational model. All speakers reaffirmed the summit’s motto – “welfare for all” – as the guiding principle for Indo-German AI collaboration [39-42].


Session transcript: Complete transcript of the session
Anandi Iyer

And we are delighted to note that many of the activities that were outlined in the MOU have already kick-started. So I think to that extent, it’s fantastic that Dr. Padhya is here today and will share with us his ideas on where an Indo-German collaboration in innovation and AI can have a value proposition for both countries. I’m also delighted to have my own colleague, Dr. Thomas Kuhn, who has come all the way from Germany from the Fraunhofer Institute for Experimental Software Engineering. He will be sharing some of our experiences and competencies in AI, particularly with a focus on workplace change, which is, of course, one of the biggest topics we have today, but also manufacturing, agriculture, and health.

And I think one of the key topics that Fraunhofer can present to the Indian audience is data space. We have created a secure data space in the cloud for many years where challenges remain. Industries are brought in and discussed. So you will hear a lot from him. I’m particularly delighted that four Bangalorean CEOs are here today. We have with us Sindhu Gangadharan, who needs no introduction, I think; anyone who doesn’t know her must be living under a stone. We also have Anshuman Awasthi, CTO of the Mercedes-Benz Research and Development Center in Bangalore. And we have Dattatri Salagame from Robert Bosch Software Solutions. And, of course, Prashant Doreswamy, he’s sitting at the back, from Aumavio, which was earlier Continental.

So as you can see, a power-packed industry captain session, which will follow after the panel. And last but not the least, we have Murali Nair from Bertelsmann Stiftung, which is a think tank that has been producing a lot of knowledge papers around India and Germany, and actually positioning India as a positive partner for Germany long before Germany even started looking at us so seriously. So thank you for that, Murali. I’m standing here in front of you today as the head of Fraunhofer in India for the last 18 years, extremely proud that Fraunhofer is one of the largest applied research ecosystems in the world. We have 76 institutes in Germany, we are present in more than 80 countries, and we produce two patents for every working day. We are also the inventors of MP3, white LEDs and many such inventions. But what really singles us out in this innovation corridor is the fact that we take research from lab to the market in the shortest period of time. So this is an area where India really needs support, and we are delighted that we were one of the first movers coming into India 18 years ago. Even today there are very few international R&D organizations that are active in India. We have been earning more than 70 million euros in the last 10 years from research contracts with Indian industry, government and research organizations.

And we have an absolutely amazing network of partners in India, including organizations like CDOT, but also with Indian industry, which is really gearing up for innovation. So I think we came in at the right time and took advantage of this, you know, innovation trajectory that has picked up in India. And it’s exciting to see what’s unfolding before us. I have to say that today what we are bringing together is really a thought -provoking session. We only have 15 minutes, so I would like to get on with business immediately. And we hope that some of the thoughts that we are leaving behind with you today will resonate and will actually trigger a long -term dialogue and engagement so that we can take the Indo -German innovation collaboration to greater heights.

I would now like to invite Mr. Georg Enzweiler. To kindly deliver his special address.

Georg Enzweiler

Good morning, ladies and gentlemen. I don’t know whether it’s maybe that we are just as crazy as you are sometimes that we are big fans of Karnova here in India. It’s always delightful to be invited by you here in India. You do amazing work here in the country. I wish we’d have several Anandi Iyers here in the country to multiply your efforts. So thank you very much for the invitation. And I’m impressed by the distinguished panel of speakers today and of panelists here. Thank you. All of the panelists and speakers here today are drivers of technology, and so it makes sense that you are also at the forefront of innovative AI technology. From increasing crop revenue, improving diagnostics, or minimizing errors in manufacturing, AI undoubtedly has huge potential for social and economic good.

AI is predicted to contribute between $5 and $15 trillion to the global GDP by 2030. But there are also questions, of course. How can we ensure that this growth is inclusive? How do we minimize negative effects for people and the planet? What kind of effect would this have on labor markets, for example? After all, the motto of this summit is actually welfare for all. India, for one, has ambitious goals to create massive computing infrastructure in big part powered by green energy. Germany is investing in so-called AI lighthouses, which foster AI innovations for climate and environmental protection. Since 2020, Germany has funded over 60 projects leveraging AI for sustainability. The topics range from wildfire prevention and renewable energy to biodiversity monitoring and circular economy.

It is essential that government, academia, and industry work hand in hand. By promoting research and development, creating a clear regulatory environment, and investing in the training of skilled workers, we can unlock a broader potential for AI. India shows huge potential here. It accounts for 15 % of the global AI talent pool and has the highest AI skill penetration rate. In terms of research and development in AI, India ranks third in the world after the U.S. and China. India and Germany, with their longstanding scientific partnership, are natural partners in creating solutions that are sustainable and inclusive for all. Hence, only two days ago, Ministers Vaishnaw and Wildberger launched the India-Germany AI Pact, a new partnership focused on implementation-driven collaboration across government, industry, research, skill development, and innovation.

It will include aspects such as AI for industry and manufacturing; talent, skills, and mobility; joint research, innovation, and infrastructure; and overall AI for social good. So I very much look forward to today’s session, and hope that it can offer food for thought on this most relevant topic. Thank you again for the invitation, and I wish all of us an interesting discussion on many of these pertinent topics. Thank you.

Anandi Iyer

Thank you for your kind words, Mr. Ensweiler. We now have the pleasure of hearing Dr. Thomas Kuhn, who heads the Division of Embedded Systems in Fraunhofer IESE. As you all know, we talk artificial intelligence. In Fraunhofer, we call it augmented intelligence, which means that human intelligence is still at the core of what we’re talking in terms of AI. It’s a pleasure to bring Dr. Thomas Kuhn to India. The floor is all yours.

Dr. Thomas Kuhn

Yes, thank you for inviting me. So, what is Fraunhofer doing in the field of AI? You all know that AI is, on the one hand, large language models, mostly driven by the US, and we also have huge open-source models coming from China. So what is Fraunhofer contributing? Our goal is to bridge industry and academia, to support industry in creating new products and bringing ideas into practice. And I have chosen here a few highlights of what we can do as Fraunhofer. For example, when you speak about AI, the question is: how reliable is the result of the AI? Do you get concrete, reliable information? That's one topic, trustworthiness. How can we achieve trustworthy responses? That's something Fraunhofer is researching.

For sustainable growth, it is also very important to preserve the knowledge that is available in a company, especially when people retire. Small and medium-sized enterprises have exactly this problem: knowledge is leaving, and you cannot get it back because the new employees simply don't have it anymore. So how can you preserve knowledge? We call this industrial AI: bringing AI models into practice, creating specific, specialized AI models based on company data, based on sensitive data (not personal data of persons, but data that is sensitive for the business), so that the AI can support these companies with their needs. For example, it creates a colleague that is an expert for a specific device and that can help

humans to work more efficiently with this device. And last but not least, as my colleague already introduced, data spaces are a key technology for this, because for AI we need to be able to train models, and therefore we need reliable access to data and a way to share data, but based on rules. That's what data spaces are meant for. With Fraunhofer, we have the ability to instantiate data spaces that scale up to 10,000 transactions per second, and that's quite a lot. Fraunhofer is structured into alliances, and the most important one here is the Big Data and Artificial Intelligence Alliance: more than 30 institutes that team up on AI, creating AI strategies, each institute with its own field of competence, but all of us supporting best practices, studies, and expert opinions, and we support all companies in creating new solutions.
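The rule-based data sharing that data spaces enable can be illustrated with a small sketch. This is a hypothetical toy model, not a Fraunhofer API; the names `UsagePolicy`, `DataOffer`, and `request_access` are illustrative. A provider attaches a policy to its data offer, and access is granted only when the consumer's declared purpose and volume satisfy that policy.

```python
# Toy model of rule-based sharing in a data space.
# All names are illustrative; real data-space connectors are far richer.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class UsagePolicy:
    allowed_purposes: set = field(default_factory=set)
    max_rows: Optional[int] = None  # optional volume cap

@dataclass
class DataOffer:
    provider: str
    policy: UsagePolicy

def request_access(offer: DataOffer, purpose: str, rows: int) -> bool:
    """Grant access only if the declared purpose is allowed and the
    requested volume stays within the provider's cap."""
    if purpose not in offer.policy.allowed_purposes:
        return False
    if offer.policy.max_rows is not None and rows > offer.policy.max_rows:
        return False
    return True

offer = DataOffer("machine-builder-A",
                  UsagePolicy({"ai-training"}, max_rows=100_000))
print(request_access(offer, "ai-training", 50_000))  # permitted by policy
print(request_access(offer, "resale", 10))           # purpose not allowed
```

The point of the sketch is the contract, not the mechanics: data leaves the provider only for purposes the policy explicitly permits.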

And as you can see, we have a lot of fields that our institutes are working on: life sciences and healthcare, when it comes to diagnostics and personalized medication; logistics and mobility, for optimizing supply chains and making them more resilient; production and industry, which is very important, because we also need to keep production in Europe to remain resilient, so it must be economic to produce in Europe; energy and environment, renewable energies, smart, self-managing grids, which are very important topics for AI and big data; business and finance; and security, to make sure that our data is well protected. I cannot go into detail on all of these topics, so we have collected a few samples of what we are doing: embedded AI, image analysis, collaborative transport, cancer diagnostics, swarm intelligence, and of course also defense topics.

I cannot go too much into detail here, but AI has a huge impact, and the race around general-purpose LLMs is not our way. The applications that can be created with AI, the transformation of industry: that is very much our expertise, bringing AI to use for industry, and therefore also to use for all. And then we are back at the topic of this event, welfare for all and happiness for all, because every industrial revolution in the end yielded much more wealth for everybody. I have brought some highlights, and I promise I won't go into too much detail, no worries. So the virtual colleague is one thing that is important when people leave companies.

We have expert knowledge, we have experience, and this is very hard to preserve. So just imagine a virtual colleague that follows people in the field, that learns from them, and that keeps the knowledge in the company once a person leaves, so that the company knowledge is retained. That is one project that Fraunhofer is doing in the field of AI to support industry, including small and medium-sized enterprises. Trustworthiness is very important. We all know that AI is not perfect, and probably some of you have already tweaked an AI a little bit on purpose to see: okay, how far can I go? In that sense, AI basically is stupid. It has a lot of knowledge, but it has no understanding of it.

Trustworthy AI means we observe, for example, the field of use for a specific AI model and track whether we are leaving this field. We have an uncertainty wrapper that gives you, with each response of the AI, an uncertainty value that says: okay, how trustworthy is this particular response? And that is very important when we want to apply AI in safety-relevant environments, for medical diagnostics, for image recognition, and also in traffic. It is very important then to know how safe the assumption of the AI is. Federated AI training is very important when we want to work together. For example, a robot that can grab something, which every human can do: grab into a box and take something out.

For a robot, that is a very big challenge. We can train robots to do that, but it is much more effective when we can do a federated training where everybody can contribute data, and in the end we receive one model, but one model from which you cannot derive any sensitive data, because nobody wants to share their secrets. That is what data spaces are meant for: we share data based on rules. We can provide data just for the training of AI; nobody will see the raw data, we just use it for training. And here, Fraunhofer is researching both governance policies and efficient implementations of cross-company data spaces.
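The federated training idea described here can be sketched in a few lines. This is plain federated averaging on a deliberately tiny one-parameter model, not Fraunhofer's actual system: each "site" runs a local gradient step on its private data, and only the resulting weights are shared and averaged by a coordinator.

```python
# Minimal federated-averaging sketch: participants share model weights,
# never raw data. The 1-parameter least-squares model y = w*x stands in
# for each company's private training run.

def local_update(weights, data, lr=0.1):
    """One local gradient step on a participant's private dataset."""
    w = weights[0]
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return [w - lr * grad]

def federated_average(weight_sets):
    """Coordinator step: average each parameter across participants."""
    n = len(weight_sets)
    return [sum(ws[i] for ws in weight_sets) / n
            for i in range(len(weight_sets[0]))]

# Two participants whose private datasets both follow y = 2x.
site_a = [(1.0, 2.0), (2.0, 4.0)]
site_b = [(3.0, 6.0), (4.0, 8.0)]

weights = [0.0]
for _ in range(200):  # federated rounds
    updates = [local_update(weights, site_a), local_update(weights, site_b)]
    weights = federated_average(updates)

print(round(weights[0], 2))  # converges toward the shared slope 2.0
```

The coordinator only ever sees weight vectors, which is the property the speaker emphasizes: one shared model, no raw data exchanged.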

So that's my pitch, my motivation. I hope I was able to give you some ideas, some insights into what we can do with AI besides OpenAI. I think Sam Altman is not here, so I can say that. They are doing great things, but there are also great things that can be done in addition to that. Thank you.

Anandi Iyer

Thank you very much, Thomas. This was just an appetizer. As you know, we have very limited time, but anybody who is interested can visit our stand, where we have also demonstrated some of the use cases in AI and agriculture, AI and medicine, and AI in manufacturing. You are welcome to visit us at the German Pavilion. I would now like to invite the CEO of the Centre for Development of Telematics to share a few thoughts on where India and Germany can work together. Thank you so much.

Dr. Rajkumar Upadhyay

Thank you so much, Anandi. AI is quite pervasive; it is being applied to almost all sectors, and where it is not applied yet, it will be applied sooner or later, because it underpins competitiveness, productivity, and societal resilience. For India and Germany together, with Germany being the precision engineering expert for years and India bringing the scale, I think there is a very good way for the two countries to work together and align on how AI is taken forward in a responsible, ethical way, so that it not only helps economic progress but also meets the social good challenges. As we know, India's AI ecosystem is quite vibrant, ranking third globally for its competitiveness and developer activity.

Our AI mission is already in progress. We are investing more than $2 billion. We have given startups 38,000 GPUs, and this will be further increased. AI could bring $1.7 trillion in value to India's economy by 2035, and the tech sector in particular is projected to contribute $280 billion in revenue this year itself. Germany's AI market is also rapidly expanding, driven by strong industrial integration, particularly in manufacturing, and substantial public-private investment. Germany's AI market is expected to be nearly 30 billion euros by 2030. I read that there are more than 600 AI startups in Germany, and we also have a lot of startups. So the question is how the two startup ecosystems could work together, for the benefit not only of both countries but for bringing global good to this sector.

So there will be a fair win-win situation for India and Germany, a partnership between two great countries, given our diplomatic relations, given, as I said, the precision engineering for which Germany is famous and the scale that we bring in India. I think there are many areas where we can contribute, but I thought I would focus on two or three. One, as was mentioned, is smart manufacturing: Germany's leadership in Industry 4.0 and India's expanding manufacturing. This government has invested heavily in manufacturing under various schemes like the PLI and DLI schemes, and this is going to go forward. So how do we use AI in smart manufacturing? We would like to work with Germany on developing smart manufacturing standards, cross-border industrial data flows and safeguards, and enabling energy efficiency.

And since we arrived late in this manufacturing space, a lot of our manufacturing plants have actually started using AI from the outset. In fact, recently there was an event wherein the Tatas described how they are going to bring down costs using AI in manufacturing. The second point, which I think Anandi brought up, is agriculture. Fraunhofer is already working in India on agriculture, and agriculture being one of the key pillars of the economy, I think it also makes for a good partnership: how do we improve productivity, yield, and income for farmers, the efficiency gains, the productivity gains? That will be another area where we can work together. I would also like to say that, especially given the economic progress we are making, we see a lot of challenges coming from the cybersecurity area.

We get millions of attacks in our country, millions of attacks, and I am not sure what is happening in that regard elsewhere. So AI in cybersecurity is very, very important. We use it to some extent, we would like to learn from Germany, and we would like to share what we are doing, how we are using AI to recognize patterns. Because of the kind of data flow in India, for example, we receive data at a rate of 10 terabytes per second. And how do you process this data in real time and be able to tell how and where the attacks are happening? As far as C-DOT is concerned, we work in various areas.

I don't know, some of you may be aware, but I would like to place it on record that India made its own 4G and 5G, and it was launched by the Honorable Prime Minister in September 2025. And this was a journey we took alone. For 6G, we would like this journey to be together with the world, because earlier we never developed 1G, 2G, or 3G. Suddenly our Honorable Prime Minister said, you know, 4G and 5G, and we did it in two to three years, and it is launched today. Close to 170,000 base stations are radiating, generating a data rate of 5 petabytes, and we will go further. So 5G and 6G is one area. The second area is quantum, because, as we all know, many of the leading economies have put a sunset date on the current level of encryption, which will be broken by quantum computers.

And therefore there is a need to work on quantum together. We, particularly at C-DOT, work only on quantum communication. In fact, I am the chair of the National Quantum Communication Hub, along with IIT Madras. We fund a lot of quantum research through startups. So this could be another area where we could work together. The third thing: I am not aware what kind of frauds happen, or don't happen, in Germany. In India, there was a series of cyber fraud calls and calls of that type. So we developed a system at India scale, and we are successful today in blocking them. For example, we used to get 1 million regular international calls per day, and we were getting 15 million spoofed calls per day.

And at my gateway, in 5 milliseconds, I have to decide whether a call is a real call or a spoofed call. Today we have deployed a system in India which, in 5 milliseconds, tells you: this is a spoofed call, drop it; this is a real call, send it. So, at India scale. We have also developed a platform called sancharsaathi.gov.in; I don't know how many of you are aware of it. This is, again, a digital intelligence platform. One side is exposed to the customers, the citizens, and at the back it is connected to all kinds of stakeholders. What used to happen is this.

In telecom, I have said this number is fraudulent; the bank doesn't know, the police don't know. So what we did is integrate all of the stakeholders into one single platform, which we call the digital intelligence platform. Once I declare a number to be fraudulent, for any reason, it is flagged to the bank, the police, everywhere, so that everybody takes care. Today, when an actual financial transaction happens from A to B, there is a module we have developed called FRI, the financial risk indicator. Before the money is transferred, the bank digs through the database and asks: is this transfer going to be safe? And my system tells it: no, it is not safe.

This person is not safe, so the transaction is stopped. So we have done fraud management at scale, and we would like to work with Germany to further enhance it and on how to use AI in this. And going forward, since my time is over, I would only say this: AI should not widen inequalities. It should strengthen inclusion, productivity, and resilience. Let us ensure AI becomes a pillar of sustainable economic growth and social good. As Anandi said, we already have an MOU with Fraunhofer under which we are working together. I would be very happy if this partnership goes on, and, given the geopolitical changes happening, that we go forward together.

We will go and develop the technologies together for the wider social good. Thank you so much.
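The flagging-and-blocking flow described above, where a number declared fraudulent by the telecom side becomes visible to every stakeholder and the bank consults that shared registry before money moves, can be sketched as follows. This is a hypothetical toy, not the Sanchar Saathi or FRI implementation; every name here is illustrative.

```python
# Toy sketch of a shared fraud registry plus a pre-transfer risk check.
# Illustrative only; not the real Sanchar Saathi / FRI system.

flagged_numbers: dict = {}  # number -> reason, visible to all stakeholders

def flag_number(number: str, reason: str) -> None:
    """Telecom declares a number fraudulent; bank and police see it too."""
    flagged_numbers[number] = reason

def financial_risk_indicator(payee_number: str) -> str:
    """Bank-side check before money moves: 'high' if the payee's number
    is in the shared registry, otherwise 'low'."""
    return "high" if payee_number in flagged_numbers else "low"

def transfer(amount: float, payee_number: str) -> bool:
    """Stop the transaction when the risk indicator is high."""
    if financial_risk_indicator(payee_number) == "high":
        return False  # blocked before the money leaves
    return True

flag_number("+91-XXXXXXXXXX", "spoofed-caller-id")  # placeholder number
print(transfer(5000.0, "+91-XXXXXXXXXX"))  # blocked
print(transfer(5000.0, "+91-YYYYYYYYYY"))  # allowed
```

The design point is that one declaration propagates to every stakeholder through a single shared registry, instead of each institution keeping its own blind list.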

Anandi Iyer

Thank you, Dr. Upadhyay. I can only underscore that it has been an absolute privilege working with you. And it is not something where German technology is simply being given to C-DOT; we are actually co-creating. There are India-specific challenges on which teams from C-DOT, who are extremely qualified and have amazing infrastructure, are working. I would really appreciate it, Dr. Upadhyay, if at some point some of our German delegates, maybe not this time but next time, could come and see the kind of work you are doing, because I think that brings a different level of understanding, insight, and trust. Without wasting any time: I'm sure all of you are waiting to listen to Professor Dr. Sinemus. We had her yesterday, and I must tell you that it is really amazing to see, first of all, a woman as a head of innovation and digitalization in Germany.

Prof. Dr. Kristina Sinemus

It is about closing the gap between research and really bringing it to society and to economic growth. At the end of the day, I come out of the biotechnology area, and I think we have really lost a lot of chances in turning research into economic growth. And I think AI has the opportunity, as well as quantum, as you mentioned, to do this. Your Excellencies, ladies and gentlemen, it is a great honor for me to deliver a special address; I have never given a special address before, and I will try to do this here at the German National Technology Forum. I come from a region where we work closely together with Fraunhofer.

Even in my hometown, we have two Fraunhofer institutes, and one of them is the Fraunhofer institute for cybersecurity, where we also have the digital hub for cybersecurity. So we built a startup ecosystem in the area, at the interface of cybersecurity, digital, and AI. Many of these could be anchor points afterwards. As Minister for Digitalization and Innovation, I very much welcome the opportunity to discuss how we can turn AI into concrete public value: in manufacturing, in agriculture, in healthcare, and through trustworthy AI, as Thomas Kuhn has pointed out. This AI Summit is all about AI, and we see how dynamic the process is. AI is already transforming how we produce, how we heal, how we grow food, and how we govern.

And the key question therefore is not if AI will shape our society; the question is how, and whether economic development and social growth move forward together rather than in separate directions. Looking at the dynamics of AI going forward, we really have to be careful that we balance economic development and the social part, and I think this has to do with trust as well. In my region, we developed a clear strategy, and our digital strategy and AI strategy are based on a simple but demanding principle: technology must serve people, not the other way around. Our AI agenda combines innovation with responsibility. We want AI that strengthens competitiveness, but also social coherence, fairness, and trust.

And this is why we invest not only in research and infrastructure, but also in practical projects that deliver tangible benefits for citizens and business. So we have a funding program of over $60 billion, and we invest in people who have a vision of how AI can benefit humans. We have invested in 170 startups so far, and I want to give you some examples so that you get an idea. I start with healthcare, because healthcare is one of the most promising fields for social good. We support innovative projects that would be very difficult to realize without our public backing. For instance, we fund one project called DEMO: a small group of three people working on their PhDs at the Technical University of Darmstadt, my alma mater, and I was happy to give them 1.8 million euros, because they are developing a load-bearing robotic wheelchair.

Essentially a walking robot that can safely and autonomously overcome barriers such as stairs or uneven ground using advanced robotics and AI. This project will help people with mobility impairments. That is no science fiction; it is the difference between dependence and independent participation in everyday life. And I think that is really a good example of how we invest our funds. Another example is RISCA, risk certification in cardiology using AI. The goal is to build a clinical decision support system that analyzes patterns in ECG recordings with AI and detects cardiovascular diseases at an early stage, so you can move toward prevention. Those are two of the many AI and healthcare startups we are funding.

A second field I would like to highlight is agriculture. For India too, agriculture is a very important area and a crucial pillar of economic development and food security. Again, we support a project, just one of many, on the early detection of plant diseases. Using automated analysis of satellite data, so we even connect to space, AI models can recognize signs of plant stress and disease at a very early stage of the growth cycle, so you can really reduce pesticides by analyzing early. AI can, at the end of the day, make agriculture more sustainable and more productive at the same time, and I think this is a direction we have to pursue.

Another example in our strategic agenda for the future of AI is that we set up an AI innovation lab at Hessian AI, co-funded by my ministry, and we provide a high-performance computing environment specifically for AI applications, along with advisory services for science, business, and public administration. A particular focus is on startups and SMEs: we want to enable smaller players to develop innovative AI solutions. And we want to be, at the end of the day, in a position where trustworthy AI is not just a slogan but a practical reality. This is why we established an AI quality and testing hub in Hesse, a public-private company that develops methods and tools to test

AI systems. At the end of the day, the aim is to translate the idea of trustworthy AI into testable criteria and practical procedures that businesses can use in real deployments. So you can say: that is trust for AI, and we express it as trustworthy criteria. At the end of the day, I think, looking at the discussions we have had in the last days, the Indo-German dimension is particularly important. India has shown with initiatives like UPI and IndiaStack how digital public infrastructure can scale and enable innovation at massive levels. Germany can bring expertise in regulation (not too much), data protection, and quality assurance. Together we can build bridges between scale and safeguards, between innovation and rights, between economic development and social good.

Because our thinking is grounded in the same values, democratic values. And let me close with a conviction that guides our work: AI will not automatically lead to better outcomes. It depends on the choices we make: what we fund, how we regulate, which ecosystems we build, and whom we include. So let us proceed on our democratic values and go forward with the Indo-German collaboration in AI. Thank you.

Anandi Iyer

Thank you so much, Dr. Sinemus. It was really a thought-provoking and very concrete talk on what we can do with AI, and on the scope for Indo-German collaboration. Without much ado, let me take forward the topics that you have mentioned by inviting the four CEOs to please come on stage. Can I request Dattatri, Sindhu, Anshuman, and Prashant to kindly come up? We have a few minutes to really get into the topics. And I would like to start by very quickly saying that in the last few days I have attended quite a few of the sessions here, and there are so many nomenclatures for AI. It has been defined as a transformational technology that comes once in several years.

People give examples of the steam engine, the Internet revolution, and Industry 4.0, which dramatically changed the way we live and work. We are now riding an AI transformation, complete with anxiety, speculation, and admiration, but also with excitement. So let me take the audience through what makes a CEO stay up at night. And let me start with Dattatri. Bosch has been a pioneer in India. You came here a century ago, and you have stayed on, resilient in the Indian market, which has not always been easy. You cover a lot of bases, spanning manufacturing, software, medical, and many more sectors. What does the vision board of Bosch look like in terms of AI development and deployment?

Dattatri Salagame

Maybe, on the first question: it is not that which keeps me awake at night. But when you are 100 years old, the effort it takes to make changes is also higher, and that keeps me awake at night. As Bosch, I think we are working on two sides of this. On one side, we are deploying AI into the market: autonomous driving, the AI cockpit, AI in healthcare. These are our products which go into the market, so fundamentally we are navigating a new business model, new consumer behavior, and developing new products for that. On the other side, we are using AI to disrupt long-held beliefs about how we build software and how we do engineering. So, as I say, while there is huge excitement about positioning our AI-centric products in the market, I would be dishonest if I said we don't have anxiety about how to get through the paradigm shift of engineering and software development using AI. These are the two edges at which we are operating now. Thank you so much.

Anandi Iyer

Sindhu, let me come to you. You are the world's leader in enterprise application software; I remember you once telling me that 97% of companies worldwide use SAP. It is already embedded in their workflows even before people are talking and wondering how to go about it. But you have a responsibility, because you have a captive client base. How do you see these innovative technologies being rolled out to companies while they still have anxieties about their business? And how do you go about assuaging those fears and building a model that works on layers that are already set and, like he said, very difficult to move right now?

Sindhu Gangadharan

Well, first of all, Anandi, it's great to be here. It's an absolute honor to be amongst this very distinguished panel, with you and all the previous speakers we have heard. I think, like Dattatri said, we live in times which are changing fast, particularly in technology, right? Literally every day you have to open up and see what new LLM is out there. So that's the way it's been, and that's the world we are now in. When we are talking about the world's largest enterprise application portfolio from SAP, the trust that our customers place in us, many of them sitting right here at these tables, is very high.

And so when we say that 87% of the world's business transactions touch an SAP system, it is a matter of responsibility and of trust, which we continue to take forward across the many industries and portfolios that we serve, right? But at the same time, given the pace at which technology is changing, our responsibility from an AI perspective is embedding AI across the core of our business processes. So if you take a process like lead-to-cash, total workforce management, design-to-operate, or strategic sourcing and procurement, this is where we show our customers: we are helping you run at your best, in the most compliant, the most ethical, and the most responsible manner, right?

And so, I want to say, we make sure to protect choice. Customers have the choice to run in a secure manner, in a compliant manner, in an ethical manner, and in a responsible manner. Which also means, echoing several of our speakers, you need to be able to explain the decision that you are making or asking the user to make, providing the explainability, the transparency, the fairness of that decision-making, the auditability, right? And that becomes even more important because today, when we are talking about agentic AI, we are talking about autonomous workflows, we are talking about decision-making. Yes, the human is in the loop, but part of the workflow is completely autonomous, right?

And so that's a lot of the work that's keeping us busy at night and in the mornings, I would say: really making sure that when we say customers can run at their best, they do so in a compliant, secure, and responsible manner, and that they are able to get there.

Anandi Iyer

Thank you, Sindhu. That is indeed the charge: enabling customers to run in a safe manner. Whether you're up at night or not, you're certainly on your toes. The one person who has been up at night for several months now, and I can personally vouch for this, is Prashant. Continental is now Aumovio. It has not only changed in name, but also in terms of its core processes and some of its business sectors. So innovation is not only in the name, it's also in what you do. What is Aumovio now up to, and what is happening?

Prashant Doreswamy

First of all, thank you, Anandi. I'm sure all the panelists agree that in India we have moved from support hubs to centers of excellence. At Aumovio, we are focusing on two things. One is certainty of allocation for our customers and innovation for the stakeholders, and the second is customer-centric innovation. Just to let you know, in one year we have been able to gain efficiency improvements in R&D in excess of 20%. How did we do this? Through a couple of things, starting with core technology: be it in the plants, in terms of how we enhance quality using AI, or in testing, where there are always a lot of false calls, which require a lot of effort to redo.

That is where the implementation of AI enhances quality. The second is software development, using GitHub Copilot, which I was talking about, both for coding and for developing different test scenarios and test situations. So these are the things that we are really focusing on. One, as I said, is how we really enhance the velocity in engineering and R&D using this automation. We also do something called ReckNet. In automotive, the biggest challenge today, with so much complexity of technology, is getting the requirements complete, because you always leave out one or two which are very, very important. So how do you really enhance this? That is what ReckNet is for.

And second is agentic AI, which is used in the enterprise in functions like finance, controlling, or market analytics. This helps us by giving a very good summary for the leadership to take decisions based on the data. And third is certainly product innovation. We have a display here in the German Pavilion, developed here in India, on autonomous mobility, which is called Enhanced Light Fusion: with the use of AI, we have been able to enhance the camera vision for easier detection. The second is something called the e-travel companion, where you seamlessly interact with the vehicle, and the third is how the driver will seamlessly interact with the car.

These are three of the innovations we have developed, which are displayed at the German stand as well. So, in summary, I think we continue to drive what we call the seamless integration of AI into our operations.

Anandi Iyer

Thank you so much, Prashant. Let me go to Mercedes. Mercedes-Benz is a North Star in innovation; you have been around for many decades. It has also been a very competitive environment in manufacturing, where a lot of people say that Germany has lost its edge. And it is an old-school field. So how do you bring AI into this kind of operation, which is reflected in your standardized operations and, as I said, very old-school procedures? Is it challenging to get AI into it now?

Anshuman Awasthi

Good morning, everybody, and thank you for having me here. Yes, Mercedes has been one of the most important companies in innovation, and we plan to remain so. We have been doing this for many, many years. AI as a technology is not a challenge; we don't see any technology as a challenge, and we have been using it. If I'm not wrong, Mercedes-Benz was the first automotive company to bring AI into its cars, in 2019 itself. So if you have driven our cars from 2019 onward, you must have experienced AI. Today our focus is mainly two-fold. For example, we want to offer the technology to our customers: you drive the car and you experience a lot in the car, how the car behaves, how you interact with the car, and so on.

And how the seat and other parts are behaving for you. The second part we are also planning to do is tighten our operations using AI; we want to bring operational efficiency using AI. Many things have been mentioned, and one such thing we are trying to implement now can bring benefit to all of us. The technology is very exciting, because sometimes you feel that it makes so many things evident. So we are looking forward to going ahead with using it.

Anandi Iyer

And, of course, NBRDI in Bangalore is playing a very big role in this process?

Anshuman Awasthi

Yes. The 2019 application was largely developed at NBRDI by our AI experts back then.

Anandi Iyer

That's very good. We have another five minutes, and I'd like to use them for a rapid round talking about India. We've always heard about the great combination of manufacturing excellence and engineering in Germany, and scale and speed in India. Let me ask Dattatri: is this heightened German interest in India offensive or defensive?

Dattatri Salagame

I would say it was long overdue. Given the capabilities of the two countries and the common interests that we share, for reasons unknown we have played it very low, while many others have played it significantly high. So I think now we can collaborate on many dimensions, not just the precision engineering of Germany; it is more than that. The pragmatic approach Germany brings to solving problems from first principles is what is required now, because we are in a phase where we have to cut through the noise of AI and get to the substance of AI. For this, I think we, I mean, we all know in this room… So it might be a moment, generally, for how we can remain together.

So it’s an opportunity that I see. Thank you.

Anandi Iyer

That's very diplomatically put. Sindhu, I come to you. You have just inaugurated a huge campus, and you have ambitions of taking in more than 14,000 to 15,000 people. The controversial question I'd like to put to you is this: they say that the cost arbitrage is four to one, that is, for one German you can get four Indians. The value arbitrage, of course, is just the opposite, right? So we are looking at a demographic dividend in India, but there is a huge aspect of inclusion, productivity, skilling and reskilling. How does that play into your vision for India?

Sindhu Gangadharan

You started off by talking about the campus, right? And from the arbitrage point of view, we were talking only about the campus. I think we just had Chancellor Merz coming to the campus as well, and looking at India as a mere cost destination is clearly off the table; I don't see it in any of these conversations. Today, when we mention GitHub Copilot, or if I just look at Claude Code and the latest tools that we have, the speed at which you can do things is of course changing. But what it means is that experts can focus on high-value work and high-value domain use cases, and this is the beauty of the pace at which innovation is happening. This is also why companies like SAP, or anybody else here, are invested in India: because they want to build with the ecosystem and the research talent here.

So we really want to bring this forward. And the pace at which our people here in India are able to take things forward, understand the context, drive things, drive that leadership and the creation of IP: that's the change we're talking about.

Anandi Iyer

Thank you, Sindhu. I actually have two more provocative questions for my other two panelists, but due to the paucity of time, we'll have to stop here, because we still have Morley talking about his study. I think the fact is clear: the opportunity we have in India is an opportunity like never before. It's an inflection point. It is an opportunity for cooperation and working together with, as was said, shared values and a very clear agenda. I thank you for being here and for sharing your thoughts. And can you really quickly come and talk about your study, please? We just have two minutes and we have to close, so please come quickly. Thank you so much.

Related Resources
Knowledge base sources related to the discussion topics (17)
Factual Notes
Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“Many of the activities outlined in the Indo‑German memorandum of understanding on innovation and AI have already been kick‑started.”

The knowledge base includes a statement that many of the activities outlined in the MOU have already been kick-started, confirming the claim [S1].

Additional Context (medium)

“Fraunhofer’s secure, rule‑based data‑space infrastructure enables high‑throughput AI training while protecting sensitive corporate data.”

The knowledge base notes that Fraunhofer has created a secure data space in the cloud for many years and highlights it as a key topic for Indian audiences, providing additional context to the claim about data-space infrastructure [S14].

Additional Context (medium)

“The summit’s theme was framed around inclusive, climate‑friendly AI growth.”

The AI Impact Summit 2026 is described as operating under the guiding theme “people, planet, and progress,” which aligns with the inclusive and climate-friendly AI focus mentioned in the report, adding contextual support [S37].

External Sources (88)
S1
IndoGerman AI Collaboration Driving Economic Development and Soc — -Prashant Doreswamy- Representative from Averior (formerly Continental), expert in automotive technology and R&D
S2
IndoGerman AI Collaboration Driving Economic Development and Soc — Speakers:Dr. Thomas Kuhn, Dattatri Salagame, Prashant Doreswamy Speakers:Georg Enzweiler, Prof. Dr. Kristina Sinemus, D…
S3
The myth of the lone genius: How scientific revolutions really happen — An epistemological footnote: American historian and philosopher of science Thomas Kuhn has developed atheory of scientif…
S4
IndoGerman AI Collaboration Driving Economic Development and Soc — Thank you for your kind words, Mr. Ensweiler. We now have the pleasure of hearing Dr. Thomas Kuhn, who heads the Divisio…
S5
Secure Talk Using AI to Protect Global Communications & Privacy — It’s great points. And as you said, I think the silos of data that we have, and in fact, you talked about training the m…
S7
IndoGerman AI Collaboration Driving Economic Development and Soc — Speakers:Anshuman Awasthi, Anandi Iyer
S8
WSIS Prizes 2025 Winner’s Ceremony — – **Rajkumar Upadhyay** – Dr., Representative from Centre for Development of Telematics, India India’s AI and Facial Re…
S9
IndoGerman AI Collaboration Driving Economic Development and Soc — -Dr. Rajkumar Upadhyay- CEO of Center for Development of Telematics (CDOT), expert in telecommunications, quantum commun…
S10
Fireside Chat The Future of AI & STEM Education in India — Welcome to the panel, sir. Let me now invite Dr. Raj Kumar, Founding Vice -Chancellor at O .P. Jindal University. Dr. Ra…
S11
IndoGerman AI Collaboration Driving Economic Development and Soc — -Anandi Iyer- Head of Fraunhofer in India (18 years), moderator of the session This comprehensive discussion on Indo-Ge…
S12
IndoGerman AI Collaboration Driving Economic Development and Soc — And we are delighted to note that many of the activities that were outlined in the MOU have already kick -started. So I …
S13
IndoGerman AI Collaboration Driving Economic Development and Soc — – Dr. Rajkumar Upadhyay- Dattatri Salagame
S14
https://dig.watch/event/india-ai-impact-summit-2026/indogerman-ai-collaboration-driving-economic-development-and-soc — And I think one of the key topics that Fraunhofer can present to the Indian audience is data space. We have created a se…
S15
IndoGerman AI Collaboration Driving Economic Development and Soc — – Georg Enzweiler- Dr. Rajkumar Upadhyay- Dattatri Salagame- Sindhu Gangadharan – Georg Enzweiler- Prof. Dr. Kristina S…
S16
IndoGerman AI Collaboration Driving Economic Development and Soc — -Prof. Dr. Kristina Sinemus- Minister for Digitalization and Innovation (Germany), background in biotechnology
S17
IndoGerman AI Collaboration Driving Economic Development and Soc — Speakers:Dr. Thomas Kuhn, Prof. Dr. Kristina Sinemus Speakers:Georg Enzweiler, Prof. Dr. Kristina Sinemus Speakers:Geo…
S18
IndoGerman AI Collaboration Driving Economic Development and Soc — -Sindhu Gangadharan- CEO (company not explicitly mentioned but appears to be SAP based on context), expert in enterprise…
S19
IndoGerman AI Collaboration Driving Economic Development and Soc — Speakers:Dr. Thomas Kuhn, Dr. Rajkumar Upadhyay, Sindhu Gangadharan Speakers:Georg Enzweiler, Dr. Rajkumar Upadhyay, Da…
S20
Welfare for All Ensuring Equitable AI in the Worlds Democracies — This panel discussion focused on democratizing AI’s impact globally and preventing the concentration of AI’s economic va…
S21
Germany ramps up AI funding to close global tech gap — Germany is planning to increase its AI research funding by almost one billion eurosin the next two years, aiming to narr…
S22
https://app.faicon.ai/ai-impact-summit-2026/indogerman-ai-collaboration-driving-economic-development-and-soc — And as you can see, we have a lot of fields that our institutes are working on, life sciences, healthcare when it comes …
S23
Discussion Report: AI Implementation and Global Accessibility — The speakers demonstrated strong consensus across all major discussion points, including workforce disruption challenges…
S24
AI for Safer Workplaces & Smarter Industries_ Transforming Risk into Real-Time Intelligence — Yeah, so this is an educator’s dilemma. To what extent we support the use of AI and the more important ethical and respo…
S25
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — Agreed with:Dr. Rajkumar Upadhyay, Dr. M P Tangirala — Cross-sector collaboration is necessary for effective scam preven…
S26
WS #31 Cybersecurity in AI: balancing innovation and risks — Sergio Mayo Macias: Yes, thank you. Thank you, Gladys. Well, actually, the AI environment in Europe is known and has …
S27
Global Enterprises Show How to Scale Responsible AI — Gurnani argues that trustworthy AI is fundamentally about end-user confidence, requiring three key enablers: security as…
S28
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — Jungwook Kim: Thank you. So the question is dealing with the safety or security issues around the AI and it’s a public o…
S29
Science as a Growth Engine: Navigating the Funding and Translation Challenge — Low to moderate disagreement level. The speakers largely align on core issues like the importance of long-term investmen…
S30
Diplomatic policy analysis — Global collaboration:Policy analysis helps identify shared interests and opportunities for cooperation, fostering consen…
S31
Sharing business perspectives on the road to the WSIS+20 review — In conclusion, the dialogues at the panel conveyed an emphatically positive sentiment towards the achievements within th…
S32
The digital economy and enviromental sustainability — By aligning policies across sectors, there is potential for greater collaboration and unified efforts towards environmen…
S33
Secure Finance Risk-Based AI Policy for the Banking Sector — The conversation framed AI governance not as a constraint on innovation but as an enabler of sustainable, trustworthy AI…
S34
Laying the foundations for AI governance — Artemis Seaford: That is a great question. So there is a misconception that companies do not want regulation. And maybe …
S35
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — Goldman’s intervention moved the conversation from high-level policy frameworks to practical implementation challenges. …
S36
Responsible AI in India Leadership Ethics & Global Impact — Democratization vs. enterprise luxury concern: Panelists discussed whether responsible AI practices risk becoming access…
S37
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Ante este panorama, los países del sur global debemos priorizar estrategias y normativas para un uso ético y responsable…
S38
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The roadmap is built upon core principles including “human and planetary welfare, accountability and transparency, inclu…
S39
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 2 — Strong consensus emerged around human-centered AI principles. Austria’s State Secretary Alexander Perol articulated the …
S40
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — 2. Policy Harmonisation and Regional Integration: This perspective was partially challenged by an audience member who e…
S41
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Demands on policy exist without the building blocks to support its implementation Factors such as restricted access to …
S42
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — ## Industry Perspectives: Systems Integration Challenges However, Flanagan highlighted a fundamental challenge: “The la…
S43
Global AI Policy Framework: International Cooperation and Historical Perspectives — Werner identifies three critical barriers that prevent AI for good use cases from scaling globally. He emphasizes that d…
S44
Policymaker’s Guide to International AI Safety Coordination — This comment crystallizes the fundamental tension at the heart of AI governance – the misalignment between market incent…
S45
The Future of Public Safety AI-Powered Citizen-Centric Policing in India — Explanation:Both speakers view India’s massive scale as an advantage for AI implementation rather than a challenge, sugg…
S46
AI as critical infrastructure for continuity in public services — These key comments fundamentally shifted the discussion from a technical and regulatory focus to a human-centered perspe…
S47
AI for Democracy_ Reimagining Governance in the Age of Intelligence — I say this because the theme of this session, AI for Democracy, cuts to the heart of the matter. We are not simply debat…
S48
The open-source gambit: How America plans to outpace AI rivals by democratising tech — A “worker-first AI agenda” is the key social pillar of the Plan. The focus is on helping workers reskill and build capac…
S49
Developing capacities for bottom-up AI in the Global South: What role for the international community? — Jovan Kurbalija: Thank you. She’s quiet. Okay, okay. Good. Great. We heard from our excellent speakers at the very begin…
S50
Skilling and Education in AI — This discussion focused on leveraging artificial intelligence as a tool for development and equality in India, examining…
S51
Opening address of the co-chairs of the AI Governance Dialogue — While this transcript captures only the opening remarks of the AI Governance Dialogue, the key comments identified estab…
S52
Inclusive AI Starts with People Not Just Algorithms — Consensus level:High level of consensus with complementary perspectives rather than conflicting viewpoints. The speakers…
S53
Welfare for All Ensuring Equitable AI in the Worlds Democracies — This panel discussion focused on democratizing AI’s impact globally and preventing the concentration of AI’s economic va…
S54
How nonprofits are using AI-based innovations to scale their impact — This panel discussion focused on an AI cohort program for nonprofits that ran from September to December, anchored by Pr…
S55
AI, Data Governance, and Innovation for Development — Sade Dada: Thank you. I believe that is a really, really important question. I mean, the whole purpose of us gathering h…
S56
Socially, Economically, Environmentally Responsible Campuses | IGF 2023 Open Forum #159 — Additional actions may be required upstream or downstream for the policy to make sense. Microsoft’s smart campus strate…
S57
Policy Guidelines — line 8 , not sold for a profit: This term may be preferable to the vaguer term ‘noncommercial’. The intention is to allo…
S58
The impact of regulatory frameworks on the global digital communications industry — Ms Ellie Templeton is a Cyber Security Research Assistant at the Geneva Centre for Security Policy. She has an Internati…
S59
Preamble — CR: cost ratio with respect to the good; cost ratio with respect to the good.
S60
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — Success will be measured not just by the environmental efficiency of AI systems, but by their ability to deliver meaning…
S61
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S62
Meta expands AI infrastructure with $1 billion sustainable facility — The US tech giant, Meta,has announcedthe construction of its 30th data centre in Beaver Dam, Wisconsin, a $1 billion inv…
S63
Closing remarks – Charting the path forward — Bouverot emphasizes that AI governance must address environmental concerns by incorporating sustainability measures. Thi…
S64
Governments and Technical Community: A Successful Model of Multistakeholder Collaboration for Achieving the SDGs — This comment articulated the bidirectional nature of learning required for effective collaboration, moving beyond the co…
S65
Future-proofing global tech governance: a bottom-up approach | IGF 2023 Open Forum #44 — Another noteworthy perspective is the advocacy for connecting multilateral spaces through people and not solely through …
S66
IndoGerman AI Collaboration Driving Economic Development and Soc — The strategic rationale for this partnership lies in the complementary strengths of both nations. India accounts for 15%…
S67
IndoGerman AI Collaboration Driving Economic Development and Soc — “Germany is investing in so -called AI lighthouses, which foster AI innovations for climate and environmental protection…
S68
Bridging the AI innovation gap — The speaker emphasizes that AI technologies must actively contribute to sustainable development goals regardless of a co…
S69
WS #31 Cybersecurity in AI: balancing innovation and risks — Sergio Mayo Macias: Yes, thank you. Thank you, Gladys. Well, actually, the AI environment in Europe is known and has …
S70
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — Jungwook Kim: Thank you. So the question is dealing with the safety or security issues around the AI and it’s a public o…
S71
Are AI safety institutes shaping the future of trustworthy AI? — As AI advances at an extraordinary pace, governments worldwide are implementing measures to manage associated opportunit…
S72
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S73
Sharing business perspectives on the road to the WSIS+20 review — Expectations were set for the panelists to provide tangible examples of effective multi-stakeholder partnerships. These …
S74
Main Session on Sustainability &amp; Environment | IGF 2023 — Discussed the need for such collaboration during the panel discussion It was acknowledged that the private sector can p…
S75
WS #103 Aligning strategies, protecting critical infrastructure — Francesca Bosco: Thank you so much. And thanks a lot for the invite. And it’s an honor to speak today. I’m very sorr…
S76
AI Collaboration Across Borders_ India–Israel Innovation Roundtable — The discussion aimed to explore and establish concrete pathways for India-Israel collaboration in artificial intelligenc…
S77
(Day 3) General Debate – General Assembly, 79th session: morning session — The 79th session of the UN General Assembly highlighted the complex and interconnected nature of global challenges facin…
S78
Responsible AI in India Leadership Ethics & Global Impact — Democratization vs. enterprise luxury concern: Panelists discussed whether responsible AI practices risk becoming access…
S79
Building the Next Wave of AI_ Responsible Frameworks &amp; Standards — The panel demonstrated a maturing field where practitioners are converging on core principles while offering complementa…
S80
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Strong consensus emerged around human-centered AI principles. Austria’s State Secretary Alexander Perol articulated the …
S81
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 2 — Thank you very much, Chair. Namaste. I wish to thank Prime Minister Mori for organizing this timely and very important c…
S82
Comprehensive Report: UN General Assembly High-Level Meeting on the 20-Year Review of the World Summit on the Information Society (WSIS) Outcomes — Artificial Intelligence Governance and Ethics Human rights | Legal and regulatory Lithuania called for artificial inte…
S83
(Day 1) General Debate – General Assembly, 79th session: morning session — Vice President: His Excellency, President of the Republic of Guatemala. The Assembly will hear an address by Her Excel…
S84
AI for good global summit — What are the gains from AI? Fifty-five percent of the gain in GDP from 2017-2030 will be due to improvements in producti…
S85
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Jonathan Mendoza Iserte:Thank you, Luca. Good afternoon. How are you? I want to thank the organizers for bringing this t…
S86
Media Briefing: Unlocking the North Star for AI Adoption, Scaling and Global Impact / DAVOS 2025 — Cathy Li: Thanks for having me. So first of all, just a very quick overview. The work is done not by one organisation…
S87
Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37 — Another speaker argues that digitalisation and technology should promote sustainable development goals and uphold human …
S88
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — Africa is one of the most energy -constrained regions. It’s also a continent where adoption is becoming very frequent. W…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
A
Anandi Iyer
2 arguments · 129 words per minute · 1839 words · 852 seconds
Argument 1
Collaboration as co‑creation, not technology transfer
EXPLANATION
Anandi emphasizes that the Indo‑German partnership is based on joint development rather than one‑sided technology hand‑over. She stresses that both sides work together to create solutions tailored to Indian challenges.
EVIDENCE
She states that the collaboration is not about Germany giving technology to CDOT, but about co-creating solutions, highlighting the joint effort and mutual contribution [212-214].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift from client-service to genuine co-creation partnerships in Indo-German AI work is highlighted in the discussion summary, with Anandi emphasizing joint problem-solving for India-specific challenges [S2].
MAJOR DISCUSSION POINT
Collaboration as co‑creation, not technology transfer
AGREED WITH
Georg Enzweiler, Dr. Rajkumar Upadhyay
DISAGREED WITH
Georg Enzweiler
Argument 2
Fraunhofer’s MOU with Indian partners and demonstration of use cases
EXPLANATION
Anandi notes that a Memorandum of Understanding with Fraunhofer has already led to concrete AI demonstrations in agriculture, medicine, and manufacturing. This showcases the practical outcomes of the partnership.
EVIDENCE
She mentions that many activities outlined in the MOU have started and that the stand at the German Pavilion displays AI use cases in agriculture, medicine, and manufacturing [1-4][129-131].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The session overview notes Fraunhofer’s long-standing presence in India and substantial Indian research contracts, underscoring concrete collaborative outcomes and use-case demonstrations [S2].
MAJOR DISCUSSION POINT
Fraunhofer’s MOU with Indian partners and demonstration of use cases
G
Georg Enzweiler
2 arguments · 100 words per minute · 470 words · 280 seconds
Argument 1
IA Pact sets implementation‑driven partnership for inclusive AI
EXPLANATION
Georg outlines the newly launched India‑Germany IA Pact, which focuses on concrete, implementation‑oriented collaboration across government, industry, research, and skill development to ensure AI benefits are inclusive.
EVIDENCE
He describes the IA Pact as a partnership covering AI for industry, manufacturing, talent, joint research, innovation, infrastructure and social good, launched by Ministers Vaishnaw and Wildberger [54-56].
MAJOR DISCUSSION POINT
IA Pact sets implementation‑driven partnership for inclusive AI
AGREED WITH
Dr. Thomas Kuhn, Prof. Dr. Kristina Sinemus, Dattatri Salagame, Sindhu Gangadharan
DISAGREED WITH
Anandi Iyer
Argument 2
AI lighthouses and 60 sustainability projects funded by German government
EXPLANATION
Georg highlights Germany’s AI lighthouses and the funding of over 60 AI projects since 2020 that target climate and environmental protection, demonstrating a strong governmental commitment to sustainable AI.
EVIDENCE
He notes that Germany has funded more than 60 projects leveraging AI for sustainability, covering wildfire prevention, renewable energy, biodiversity monitoring, and circular economy [44-47].
MAJOR DISCUSSION POINT
AI lighthouses and 60 sustainability projects funded by German government
AGREED WITH
Prof. Dr. Kristina Sinemus, Dr. Rajkumar Upadhyay
D
Dr. Thomas Kuhn
2 arguments · 122 words per minute · 1096 words · 537 seconds
Argument 1
Fraunhofer’s focus on trustworthy AI, uncertainty metrics and data spaces
EXPLANATION
Thomas explains Fraunhofer’s research on making AI outputs reliable by attaching uncertainty scores and by using rule‑based data spaces that enable secure, governed data sharing for model training.
EVIDENCE
He describes the development of an uncertainty wrapper that provides a trustworthiness value with each AI response [106-107] and the ability to instantiate data spaces handling up to 10,000 transactions per second for secure data sharing [82-84][115-118].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Fraunhofer’s research on trustworthy AI, including uncertainty wrappers and high-throughput data-space infrastructure, is described in the external briefing on their priorities [S1] and reinforced by the discussion summary on data-space capabilities [S2].
MAJOR DISCUSSION POINT
Fraunhofer’s focus on trustworthy AI, uncertainty metrics and data spaces
AGREED WITH
Georg Enzweiler, Prof. Dr. Kristina Sinemus, Dattatri Salagame, Sindhu Gangadharan
Argument 2
Preserving tacit knowledge via virtual colleague and federated training
EXPLANATION
Thomas presents the concept of a virtual colleague that captures expert knowledge when employees leave, and federated training that lets multiple companies improve AI models without exposing raw data.
EVIDENCE
He outlines a project that creates a virtual colleague to retain expert knowledge within a company [97-100] and explains federated training where data stays on-site while a shared model is built, supported by data spaces [110-115].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concept of a “virtual colleague” for retaining expert knowledge is mentioned in the Fraunhofer research overview [S1].
MAJOR DISCUSSION POINT
Preserving tacit knowledge via virtual colleague and federated training
A
Anshuman Awasthi
2 arguments · 113 words per minute · 255 words · 135 seconds
Argument 1
Automotive AI in vehicles and operational efficiency, collaboration with Indian research
EXPLANATION
Anshuman states that Mercedes‑Benz has been integrating AI into cars since 2019 and is now using AI to improve operational efficiency, with significant contributions from the Indian NBRDI research centre.
EVIDENCE
He notes that Mercedes-Benz was the first automotive company to embed AI in cars in 2019 and that the 2019 application was largely developed by NBRDI, highlighting the India-Germany research link [364-372][385-386].
MAJOR DISCUSSION POINT
Automotive AI in vehicles and operational efficiency, collaboration with Indian research
Argument 2
AI integration seen as non‑challenge, leveraging existing research partnerships
EXPLANATION
Anshuman argues that AI is not a barrier for Mercedes‑Benz; the company has long used AI and continues to rely on established research collaborations to advance its capabilities.
EVIDENCE
He asserts that AI is not a challenge for the company, emphasizing existing usage and partnerships, and mentions the role of NBRDI in earlier AI applications [364-371][384-386].
MAJOR DISCUSSION POINT
AI integration seen as non‑challenge, leveraging existing research partnerships
DISAGREED WITH
Dattatri Salagame
D
Dr. Rajkumar Upadhyay
3 arguments · 148 words per minute · 1358 words · 547 seconds
Argument 1
Joint work in manufacturing, agriculture, cybersecurity, quantum, telecom
EXPLANATION
Rajkumar outlines a broad agenda for Indo‑German cooperation, covering smart manufacturing standards, agricultural productivity, AI‑driven cybersecurity, quantum communication, and telecom fraud detection.
EVIDENCE
He mentions collaboration on smart manufacturing standards and cross-border data flows [153-155], agriculture productivity projects [156-157], AI for cybersecurity handling 10 TB/s data streams [158-162], quantum communication research [170-176], and a telecom fraud-detection system that decides in 5 ms whether a call is spoofed [180-183].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Examples of Indo-German joint projects include a telecom AI solution for SIM verification [S8] and cross-sector collaboration for scam prevention [S25]; broader sectoral focus aligns with institute activities in manufacturing, agriculture, and health [S22].
MAJOR DISCUSSION POINT
Joint work in manufacturing, agriculture, cybersecurity, quantum, telecom
AGREED WITH
Georg Enzweiler, Prof. Dr. Kristina Sinemus, Dr. Thomas Kuhn, Anandi Iyer
Argument 2
AI for smart manufacturing, agriculture productivity, cybersecurity, quantum communication, telecom fraud detection
EXPLANATION
He expands on specific sectoral AI applications, emphasizing how AI can boost efficiency in factories, increase farm yields, protect networks, enable quantum‑secure communications, and stop fraudulent calls.
EVIDENCE
He cites examples such as AI-enabled smart manufacturing standards [153-155], AI-driven agricultural yield improvements [156-157], AI-based cybersecurity handling massive data rates [158-162], quantum communication initiatives [170-176], and a real-time spoof-call detection platform processing 1 million regular and 15 million spoofed calls per day [180-183].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sector-specific AI initiatives are reflected in the institute’s portfolio covering life-sciences, logistics, and industry, as well as the telecom AI deployment for fraud detection [S22][S8].
MAJOR DISCUSSION POINT
AI for smart manufacturing, agriculture productivity, cybersecurity, quantum communication, telecom fraud detection
Argument 3
Indian AI startup boom, $2 bn investment, 38,000 GPUs for startups
EXPLANATION
Rajkumar highlights India’s rapid AI ecosystem growth, noting a $2 billion investment, distribution of 38,000 GPUs to startups, and projected economic impact of $1.7 trillion by 2035.
EVIDENCE
He reports that India is investing more than $2 billion in AI, has provided 38,000 GPUs to startups, and expects AI to add $1.7 trillion to the economy by 2035, with the tech sector contributing $280 billion this year [139-143].
MAJOR DISCUSSION POINT
Indian AI startup boom, $2 bn investment, 38,000 GPUs for startups
P
Prof. Dr. Kristina Sinemus
4 arguments · 116 words per minute · 1131 words · 584 seconds
Argument 1
Shared democratic values and mutual funding for trustworthy AI
EXPLANATION
Kristina stresses that AI development must be guided by democratic values, emphasizing fairness, trust, and public funding to ensure AI serves society responsibly.
EVIDENCE
She declares that technology must serve people, not the other way around, and that Germany invests over €60 billion in AI, supporting 170 startups, while promoting trustworthy AI through public-private initiatives [236-240][241-245].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The broader dialogue on equitable AI and the need for democratic values in AI development is discussed in the “Welfare for All” report [S20].
MAJOR DISCUSSION POINT
Shared democratic values and mutual funding for trustworthy AI
AGREED WITH
Dr. Thomas Kuhn, Georg Enzweiler, Dattatri Salagame, Sindhu Gangadharan
Argument 2
Germany’s AI quality testing hub and principle “technology serves people”
EXPLANATION
Kristina describes the establishment of an AI quality and testing hub in Hesse that translates trustworthy AI principles into concrete testing criteria, reinforcing the idea that technology should be people‑centric.
EVIDENCE
She explains that the AI quality and testing hub develops methods and tools to test AI systems, turning trustworthy AI into testable criteria for real deployments [267-269], and reiterates the principle that technology must serve people [236-238].
MAJOR DISCUSSION POINT
Germany’s AI quality testing hub and principle “technology serves people”
Argument 3
Healthcare robotics, cardiology decision support, satellite‑based plant disease detection
EXPLANATION
Kristina showcases specific AI‑driven projects: a load‑bearing robotic wheelchair for mobility‑impaired users, an AI‑based cardiology decision‑support system (RISCA), and satellite‑image AI for early plant disease detection to reduce pesticide use.
EVIDENCE
She cites funding of a €1.8 million project developing a load-bearing robotic wheelchair [247-250], the RISCA AI system for early cardiovascular disease detection [253-255], and an AI model that analyses satellite data to detect plant stress and disease early, reducing pesticide usage [258-263].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Institute activities spanning healthcare diagnostics, personalized medicine, and AI-enabled agricultural monitoring are outlined in the sector overview [S22].
MAJOR DISCUSSION POINT
Healthcare robotics, cardiology decision support, satellite‑based plant disease detection
Argument 4
Germany’s €60 bn AI programme, AI lab, and support for 170 startups
EXPLANATION
Kristina outlines Germany’s substantial AI investment, including a €60 billion funding programme, a high‑performance AI lab, and direct backing of 170 AI startups to foster innovation and economic growth.
EVIDENCE
She mentions a €60 billion AI funding programme, the creation of an AI innovation lab at Hessian AI, and the support of 170 startups with examples of funded projects [241-245][264-267].
MAJOR DISCUSSION POINT
Germany’s €60 bn AI programme, AI lab, and support for 170 startups
Dattatri Salagame
3 arguments · 139 words per minute · 341 words · 146 seconds
Argument 1
Bosch’s need for trustworthy AI to support safety‑critical applications
EXPLANATION
Dattatri explains that Bosch is deploying AI in safety‑critical domains such as autonomous driving, AI‑enabled cockpits, and healthcare devices, which require high levels of trust and reliability.
EVIDENCE
He lists Bosch’s AI market deployments including autonomous driving, AI cockpit, and AI in healthcare, emphasizing the need for trustworthy AI in these applications [301-304].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Fraunhofer’s trustworthy AI work, aimed at safety-critical domains such as autonomous driving and healthcare, provides context for Bosch’s requirements [S1].
MAJOR DISCUSSION POINT
Bosch’s need for trustworthy AI to support safety‑critical applications
AGREED WITH
Dr. Thomas Kuhn, Georg Enzweiler, Prof. Dr. Kristina Sinemus, Sindhu Gangadharan
Argument 2
Paradigm shift in engineering and new business models creates anxiety
EXPLANATION
Dattatri notes that the transition to AI‑driven engineering and new business models generates anxiety within Bosch, as it challenges long‑held software development practices.
EVIDENCE
He describes the anxiety stemming from the paradigm shift in engineering and software development caused by AI, acknowledging both excitement and concern [300-305].
MAJOR DISCUSSION POINT
Paradigm shift in engineering and new business models creates anxiety
DISAGREED WITH
Anshuman Awasthi
Argument 3
Autonomous driving, AI cockpit, AI in healthcare devices
EXPLANATION
He highlights Bosch’s concrete AI product portfolio, which includes autonomous driving systems, AI‑enhanced vehicle cockpits, and AI applications in medical devices.
EVIDENCE
He references Bosch’s market-ready AI products such as autonomous driving, AI cockpit, and AI in healthcare [301-304].
MAJOR DISCUSSION POINT
Autonomous driving, AI cockpit, AI in healthcare devices
Sindhu Gangadharan
2 arguments · 160 words per minute · 672 words · 250 seconds
Argument 1
SAP’s emphasis on explainability, fairness and compliance for enterprise AI
EXPLANATION
Sindhu stresses that SAP must embed explainability, fairness, and regulatory compliance into its AI solutions to maintain trust with its extensive enterprise customer base.
EVIDENCE
She outlines the need for explainability, transparency, fairness, auditability, and compliance in AI-enabled processes such as lead-to-cash, risk management, and procurement [318-323].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The emphasis on fairness, transparency, and responsible AI governance aligns with the “Welfare for All” discussion on equitable AI practices [S20].
MAJOR DISCUSSION POINT
SAP’s emphasis on explainability, fairness and compliance for enterprise AI
AGREED WITH
Dr. Thomas Kuhn, Georg Enzweiler, Prof. Dr. Kristina Sinemus, Dattatri Salagame
Argument 2
Enterprise clients need transparency, auditability and ethical AI to overcome fear
EXPLANATION
Sindhu argues that enterprises will only adopt AI if they can see clear, auditable decisions and are assured that AI operates ethically and responsibly.
EVIDENCE
She emphasizes that customers require explainability, transparency, fairness, and auditability, especially as workflows become increasingly autonomous [321-324].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for explainable, auditable, and ethically sound AI systems are echoed in the equitable AI framework presented in the “Welfare for All” report [S20].
MAJOR DISCUSSION POINT
Enterprise clients need transparency, auditability and ethical AI to overcome fear
Prashant Doreswamy
2 arguments · 148 words per minute · 449 words · 180 seconds
Argument 1
AI‑driven R&D acceleration, quality improvement, automotive requirement handling, AI agents for finance
EXPLANATION
Prashant describes how AI has boosted R&D efficiency by over 20%, improved product quality, helped manage complex automotive requirements, and enabled financial risk assessment tools.
EVIDENCE
He cites a 20% increase in R&D efficiency [338-340], the ReckNet system for handling automotive requirements [346-349], and the Financial Risk Indicator (FRI) platform that blocks risky transactions in real time [194-199].
MAJOR DISCUSSION POINT
AI‑driven R&D acceleration, quality improvement, automotive requirement handling, AI agents for finance
Argument 2
Enhancing R&D velocity and product innovation through AI automation
EXPLANATION
Prashant highlights that AI automation accelerates engineering cycles, enabling faster product innovation and higher operational efficiency.
EVIDENCE
He mentions that AI has led to more than 20% R&D efficiency gains and that automated technologies such as ReckNet and AI-based quality improvements are central to this acceleration [338-340][345-349].
MAJOR DISCUSSION POINT
Enhancing R&D velocity and product innovation through AI automation
Agreements
Agreement Points
Collaboration is framed as co‑creation rather than one‑sided technology transfer
Speakers: Anandi Iyer, Georg Enzweiler, Dr. Rajkumar Upadhyay
Collaboration as co‑creation, not technology transfer · IA Pact sets implementation‑driven partnership for inclusive AI · Joint work in manufacturing, agriculture, cybersecurity, quantum, telecom
All three speakers stress that Indo-German AI work should be built jointly, with shared problem-solving and implementation, rather than a simple hand-over of German technology. Anandi explicitly says the partnership is about co-creating solutions for India-specific challenges [212-214]; Georg describes the IA Pact as an implementation-driven, inclusive partnership [54-56]; Rajkumar notes the existing MOU with Fraunhofer and the joint agenda as a co-creation effort [205-206].
POLICY CONTEXT (KNOWLEDGE BASE)
This framing aligns with multistakeholder collaboration models advocated in recent IGF discussions and policy-harmonisation efforts, emphasizing bidirectional learning and bottom-up approaches [S55][S64][S40][S65].
Need for trustworthy, secure AI through uncertainty metrics and data‑space governance
Speakers: Dr. Thomas Kuhn, Georg Enzweiler, Prof. Dr. Kristina Sinemus, Dattatri Salagame, Sindhu Gangadharan
Fraunhofer’s focus on trustworthy AI, uncertainty metrics and data spaces · IA Pact sets implementation‑driven partnership for inclusive AI · Shared democratic values and mutual funding for trustworthy AI · Bosch’s need for trustworthy AI to support safety‑critical applications · SAP’s emphasis on explainability, fairness and compliance for enterprise AI
A consistent theme across research, policy and industry is that AI systems must be reliable and governed. Thomas describes an uncertainty wrapper that scores each AI response and high-throughput data spaces for secure sharing [106-107][82-84][115-118]; Georg highlights cross-company data spaces as part of the IA Pact implementation framework [120-121]; Kristina stresses that technology must serve people and outlines a testing hub for trustworthy AI [236-238][267-269]; Dattatri points to Bosch’s requirement for trustworthy AI in safety-critical domains [301-304]; Sindhu underlines the need for explainability, fairness and auditability in enterprise AI [318-324].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for trustworthy AI echo the push for sandbox environments and evidence-based regulation highlighted by Flanagan, as well as the broader tension between rapid market development and slower policy deliberation [S42][S44].
AI as a driver of social and economic development across sectors (manufacturing, agriculture, health)
Speakers: Georg Enzweiler, Dr. Rajkumar Upadhyay, Prof. Dr. Kristina Sinemus, Dr. Thomas Kuhn, Anandi Iyer
AI is predicted to contribute between $5 and $15 trillion to the global GDP by 2030 · Joint work in manufacturing, agriculture, cybersecurity, quantum, telecom · AI is already transforming how we produce, how we heal, how we grow food, and how we govern · AI impact on industry, health, logistics, production · Demonstrated AI use cases in agriculture, medicine and manufacturing at the German Pavilion
All speakers highlight AI’s broad potential to boost productivity, improve health outcomes and increase agricultural yields. Georg notes AI’s huge economic potential and its role for inclusive growth [36-42]; Rajkumar lists concrete sectoral collaborations in smart manufacturing, agriculture and cybersecurity [153-157][158-162]; Kristina states AI is reshaping production, health and food systems [228-230]; Thomas mentions AI applications across life sciences, logistics and industry [87-90]; Anandi points to live demonstrations of AI use cases in agriculture, medicine and manufacturing [129-131].
POLICY CONTEXT (KNOWLEDGE BASE)
Sector-specific AI development agendas have been foregrounded in India’s AI strategy and in global AI policy roadmaps that note infrastructure and policy barriers to scaling AI for good [S50][S45][S43].
Emphasis on skill development and capacity building for AI
Speakers: Georg Enzweiler, Prof. Dr. Kristina Sinemus, Anandi Iyer
IA Pact … investing in the training of skilled workers · Germany’s €60 bn AI programme … we invest in people who have a vision · India accounts for 15% of the global AI talent pool
The speakers agree that building a skilled AI workforce is essential. Georg cites the IA Pact’s focus on training skilled workers as part of the enabling environment [48-49]; Kristina describes a €60 bn funding programme that invests in talent and supports 170 startups [241-244]; Anandi highlights India’s strong AI talent base and skill penetration [50].
POLICY CONTEXT (KNOWLEDGE BASE)
Capacity-building is a core pillar of the ‘worker-first’ AI agenda and of Global South AI strategies, with concrete programs for reskilling, apprenticeships, and nonprofit AI cohorts [S48][S49][S54][S41].
Commitment to sustainable AI and environmental benefits
Speakers: Georg Enzweiler, Prof. Dr. Kristina Sinemus, Dr. Rajkumar Upadhyay
AI lighthouses and 60 sustainability projects funded by German government · AI can make agriculture more sustainable and reduce pesticide use · Smart manufacturing standards, energy efficiency and sustainability
All three underline that AI should contribute to climate and environmental goals. Georg mentions Germany’s AI lighthouses and over 60 sustainability projects covering wildfire prevention, renewable energy and circular economy [44-47]; Kristina gives examples of AI reducing pesticide use and supporting sustainable agriculture [258-263]; Rajkumar talks about AI-enabled smart manufacturing standards and energy-efficient processes [153-155].
POLICY CONTEXT (KNOWLEDGE BASE)
Sustainable AI is reflected in Green AI guidelines, smart-campus sustainability initiatives, and calls for AI governance to integrate climate responsibility and measurable environmental impact [S60][S61][S62][S63][S56].
Similar Viewpoints
All three CEOs acknowledge that AI is transforming their industries, creating both anxiety about new engineering paradigms and a need for trustworthy, explainable systems, while also expressing confidence that AI is already a mature technology within their organisations. Dattatri describes anxiety around paradigm shifts and safety‑critical trust requirements [300-305]; Sindhu stresses the need for explainability, fairness and auditability to overcome client fears [318-324]; Anshuman asserts that AI is not a challenge and has been used by Mercedes for years [364-371].
Speakers: Dattatri Salagame, Sindhu Gangadharan, Anshuman Awasthi
Paradigm shift in engineering and new business models creates anxiety · Enterprise clients need transparency, auditability and ethical AI to overcome fear · AI is not a challenge … we have been using it for years
Unexpected Consensus
Inclusive, people‑centred AI as a shared priority across government, research and industry
Speakers: Georg Enzweiler, Dr. Thomas Kuhn, Prof. Dr. Kristina Sinemus, Dattatri Salagame
IA Pact sets implementation‑driven partnership for inclusive AI · Fraunhofer’s focus on trustworthy AI, uncertainty metrics and data spaces · Shared democratic values and mutual funding for trustworthy AI · Bosch’s need for trustworthy AI to support safety‑critical applications
While governments often stress inclusivity and industry focuses on profit, here all four actors converge on the principle that AI must be trustworthy, transparent and serve societal welfare. Georg frames the IA Pact as ensuring inclusive AI benefits [39-42]; Thomas emphasizes trustworthy AI for safety-critical use [106-108]; Kristina declares that technology must serve people and funds trustworthy AI initiatives [236-238]; Dattatri reiterates the necessity of trustworthy AI for safety-critical products [301-304]. This alignment of public policy, research, and corporate strategy on inclusive, people-centred AI was not explicitly anticipated.
POLICY CONTEXT (KNOWLEDGE BASE)
Inclusive, people-centred AI is reinforced by high-level IGF statements, inclusive AI declarations, and the shift toward human-centered implementation in policy dialogues [S52][S51][S46][S47][S53][S64].
Overall Assessment

The discussion shows strong convergence on four pillars: (1) co‑creation partnership model, (2) trustworthy and securely governed AI, (3) AI as a catalyst for sectoral development, and (4) the need for skilled human resources and sustainable outcomes. These points are reinforced by both policy makers and industry leaders, indicating a shared vision for responsible, inclusive AI deployment.

High consensus – the majority of speakers, spanning government, research institutes and leading corporations, articulate overlapping goals and complementary actions, suggesting that future Indo‑German AI initiatives are likely to be coordinated, well‑funded and oriented toward social good.

Differences
Different Viewpoints
Perception of AI integration difficulty
Speakers: Dattatri Salagame, Anshuman Awasthi
Paradigm shift in engineering and new business models creates anxiety · AI integration seen as non‑challenge, leveraging existing research partnerships
Dattatri stresses that the shift to AI-driven engineering creates anxiety and a paradigm shift for Bosch, whereas Anshuman claims AI is not a challenge for Mercedes-Benz and that the company has long used AI, highlighting a divergent view on the difficulty of AI integration [300-305][364-371].
POLICY CONTEXT (KNOWLEDGE BASE)
Stakeholders differ on integration difficulty; some view large-scale deployments (e.g., India) as an advantage, while others highlight practical implementation barriers [S45][S46].
Approach to AI partnership – co‑creation vs implementation‑driven
Speakers: Anandi Iyer, Georg Enzweiler
Collaboration as co‑creation, not technology transfer · IA Pact sets implementation‑driven partnership for inclusive AI
Anandi emphasizes a co-creation model where India and Germany jointly develop solutions, while Georg frames the IA Pact as an implementation-focused mechanism, reflecting different strategic emphases on how the partnership should operate [212-214][54-56].
POLICY CONTEXT (KNOWLEDGE BASE)
The debate mirrors ongoing discussions in policy harmonisation about whether partnerships should prioritize co-creation or delivery-focused implementation [S55][S40][S64].
Cost‑benefit perception of SAP campus investment
Speakers: Anandi Iyer, Sindhu Gangadharan
“The controversial question that they say the cost of the charge is four is to one” · “I don’t know” (response to cost question)
Anandi raises a cost-ratio question about SAP’s campus investment (four Indians per German), but Sindhu is unable to provide an answer, indicating a mismatch in expectations regarding financial efficiency [400-403][410-413].
POLICY CONTEXT (KNOWLEDGE BASE)
Cost-benefit considerations are linked to sustainability and upstream/downstream policy actions in smart-campus initiatives, influencing perceived value of such investments [S56][S60].
Unexpected Differences
Cost‑ratio question about SAP campus
Speakers: Anandi Iyer, Sindhu Gangadharan
“The controversial question that they say the cost of the charge is four is to one” · “I don’t know” (response to cost question)
Anandi’s probing about a 4:1 cost ratio for SAP’s campus was met with an inability to answer from Sindhu, an unexpected lack of clarity on financial efficiency in a discussion otherwise focused on technology and policy [400-403][410-413].
POLICY CONTEXT (KNOWLEDGE BASE)
The issue directly relates to discussions on cost-ratio metrics for public-good projects and how they are evaluated against societal benefits [S59].
AI as a challenge vs non‑challenge
Speakers: Dattatri Salagame, Anshuman Awasthi
Paradigm shift in engineering and new business models creates anxiety · AI integration seen as non‑challenge, leveraging existing research partnerships
While Dattatri highlights anxiety and paradigm shift associated with AI adoption, Anshuman asserts that AI is not a challenge for Mercedes-Benz, an unexpected contrast given the shared industry context [300-305][364-371].
POLICY CONTEXT (KNOWLEDGE BASE)
This tension reflects the broader uncertainty about AI’s transformative potential versus the lack of an evidence base, prompting calls for sandbox testing and highlighting the policy-technology mismatch [S42][S44][S43].
Overall Assessment

The discussion revealed broad consensus on the importance of trustworthy, inclusive AI and the need for joint Indo‑German collaboration. However, clear disagreements emerged around the perceived difficulty of AI integration (Bosch’s anxiety vs Mercedes‑Benz’s confidence), the strategic framing of the partnership (co‑creation vs implementation‑driven IA Pact), and specific financial expectations (SAP campus cost ratio).

Moderate – while most participants align on high‑level goals (trustworthy, inclusive AI for social good), the divergences concern implementation approaches and perceptions of challenge, which could affect coordination, resource allocation, and the design of joint programmes.

Partial Agreements
All four speakers agree that AI must be trustworthy and ethically sound, but they propose different mechanisms: uncertainty wrappers and data spaces (Thomas), safety‑critical trust (Dattatri), explainability and auditability (Sindhu), and testing hubs with democratic principles (Kristina) [106-107][115-118][301-304][318-323][236-238][267-269].
Speakers: Thomas Kuhn, Dattatri Salagame, Sindhu Gangadharan, Prof. Dr. Kristina Sinemus
Fraunhofer’s focus on trustworthy AI, uncertainty metrics and data spaces · Bosch’s need for trustworthy AI to support safety‑critical applications · SAP’s emphasis on explainability, fairness and compliance for enterprise AI · Shared democratic values and mutual funding for trustworthy AI
These speakers converge on the need for inclusive, people‑centric AI that delivers social good, though Georg emphasizes a policy pact, Rajkumar stresses national mission goals, and Kristina highlights democratic values and funding frameworks [39-42][54-56][202-204][236-238].
Speakers: Georg Enzweiler, Dr. Rajkumar Upadhyay, Prof. Dr. Kristina Sinemus
IA Pact sets implementation‑driven partnership for inclusive AI · AI should not widen inequalities … strengthen inclusion · Technology must serve people, not the other way around
Takeaways
Key takeaways
Indo‑German AI collaboration is being positioned as a co‑creation partnership (IA Pact) that spans government, industry, research, and skill development.
Both nations bring complementary strengths: Germany’s precision engineering, regulatory expertise, and funding for trustworthy AI; India’s large talent pool, scale, and fast‑growing AI ecosystem.
Trustworthy, responsible AI is a central theme – Fraunhofer’s work on uncertainty metrics and data spaces, Germany’s AI quality‑testing hub, and SAP’s focus on explainability, fairness, and compliance.
Sector‑specific opportunities were highlighted: smart manufacturing standards and cross‑border data flows, agriculture productivity, healthcare robotics and diagnostics, cybersecurity, quantum communication, telecom fraud detection, and automotive AI.
Industry adoption challenges include paradigm shifts in engineering, the need for transparency and auditability, preserving tacit knowledge, and managing anxiety around new AI‑driven business models.
Significant funding commitments exist: Germany’s €60 bn AI programme and AI lighthouses; India’s $2 bn AI investment and provision of 38,000 GPUs to startups.
Concrete collaboration mechanisms such as data spaces, joint AI innovation labs, and MOUs (e.g., Fraunhofer‑India) were emphasized.
Resolutions and action items
Continue and deepen the co‑creation model outlined in the IA Pact; treat the partnership as implementation‑driven rather than a simple technology transfer.
Leverage the existing Fraunhofer‑India MOU to develop joint use cases in manufacturing, agriculture, health, and cybersecurity.
Organize visits of German delegations to C‑DOT’s facilities to foster mutual understanding and trust (suggested by Anandi Iyer).
Collaborate on developing smart‑manufacturing standards, cross‑border industrial data‑flow safeguards, and AI‑enabled energy‑efficiency frameworks (proposed by Dr. Rajkumar Upadhyay).
Jointly explore AI applications in quantum communication, 5G/6G security, and large‑scale fraud‑detection platforms (proposed by Dr. Upadhyay).
Set up or expand AI innovation labs and data‑space infrastructures to support SMEs and startups in both countries (mentioned by Prof. Dr. Kristina Sinemus).
Create a joint roadmap for trustworthy AI testing and certification, building on Germany’s AI quality‑testing hub and Fraunhofer’s uncertainty wrappers.
Unresolved issues
Specific regulatory frameworks and standards for cross‑border data sharing and AI trustworthiness remain to be defined.
How to ensure inclusive growth and mitigate AI’s potential negative social or environmental impacts was raised but not detailed.
Mechanisms for aligning funding streams and investment criteria for joint Indo‑German AI startups were not finalized.
The “cost of the charge” discussion regarding SAP’s campus and the balance of Indian vs. German staffing costs was left unanswered.
Concrete timelines, deliverables, and governance structures for the proposed joint projects were not established.
Suggested compromises
Adopt a co‑creation approach that combines German precision engineering with Indian scale, positioning the partnership as a win‑win rather than a one‑sided transfer.
Utilize data spaces to enable data sharing for AI training while enforcing rule‑based access, balancing openness with privacy and security concerns.
Align economic development goals with social good by embedding the principle “technology must serve people” into AI projects, reconciling growth with ethical considerations.
Encourage joint funding where German expertise in regulation and quality assurance complements Indian investment in infrastructure and talent.
Thought Provoking Comments
AI is predicted to contribute between $5 and $15 trillion to the global GDP by 2030. But there are also questions… How can we ensure that this growth is inclusive? How do we minimize negative effects for people and the planet? What kind of effect would this have on labor markets?… The motto of this summit is welfare for all.
He reframed the discussion from pure economic potential to the ethical, social and environmental dimensions of AI, reminding the audience that growth must be inclusive and sustainable.
Set the thematic tone for the whole panel, prompting subsequent speakers to address trust, responsibility and societal impact rather than just technical capabilities. It led Thomas Kuhn to discuss trustworthy AI and Upadhyay to highlight concrete societal applications such as fraud detection and cybersecurity.
Speaker: Georg Enzweiler
Trustworthy AI means we observe, for example, the field of use for a specific AI model and track whether we are leaving this field. We have an uncertainty wrapper that gives you, with each response of the AI, an uncertainty value that says, okay, how trustworthy is this particular response… This is crucial for safety‑relevant environments like medical diagnostics or traffic.
Introduced a concrete technical mechanism (uncertainty wrapper) to operationalise trustworthiness, moving the conversation from abstract ideals to measurable engineering solutions.
Shifted the dialogue toward practical implementation challenges. Prompted other participants (e.g., Sindhu Gangadharan) to talk about explainability and auditability, and reinforced the need for data‑space governance mentioned later by Upadhyay.
Speaker: Dr. Thomas Kuhn
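The uncertainty-wrapper idea quoted above — every AI response is paired with a score saying how trustworthy that particular answer is, plus a check that the input is still within the model's intended field of use — can be illustrated with a minimal sketch. This is an invented toy (the class name, the confidence-based score, and the toy model are all assumptions), not Fraunhofer's actual implementation.

```python
import math

def softmax(scores):
    """Convert raw class scores to probabilities."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

class UncertaintyWrapper:
    """Toy wrapper: attaches a per-response uncertainty value and an
    out-of-domain flag to every prediction of an underlying model."""

    def __init__(self, model, in_domain_check):
        self.model = model                      # callable returning raw class scores
        self.in_domain_check = in_domain_check  # True if input is in the field of use

    def predict(self, x):
        probs = softmax(self.model(x))
        best = max(range(len(probs)), key=probs.__getitem__)
        return {
            "prediction": best,
            # Simple confidence-based score: 1 minus the top class probability.
            "uncertainty": 1.0 - probs[best],
            "out_of_domain": not self.in_domain_check(x),
        }

# Toy two-class model whose scores depend directly on the input value.
toy_model = lambda x: [x, 1.0 - x]
wrapper = UncertaintyWrapper(toy_model, in_domain_check=lambda x: 0.0 <= x <= 1.0)

result = wrapper.predict(0.9)
print(result)
```

In a safety-relevant setting such as the medical or traffic examples mentioned in the quote, a downstream system could reject or escalate any response whose uncertainty exceeds a threshold or whose input falls outside the declared field of use.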
We have created a secure data space in the cloud for many years… Data spaces are a key technology because for AI we need reliable access to data and a way to share data, but based on rules.
Highlighted data spaces as the infrastructural backbone that enables cross‑company AI collaboration while preserving privacy and compliance—a concept that bridges policy and technology.
Provided a concrete answer to Georg’s question about inclusive growth by showing how rule‑based data sharing can protect stakeholders. Later speakers (Upadhyay, Sinemus) referenced data‑space ideas when discussing cross‑border industrial data flows and regulatory frameworks.
Speaker: Dr. Thomas Kuhn
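The rule-based sharing at the heart of the data-space concept described above — organisations can access each other's data only when an explicit rule permits it, for a stated purpose — can be sketched in a few lines. All names and the policy format here are invented for illustration; real data-space frameworks use far richer policy languages and contracts.

```python
# Hypothetical sharing rules: (provider, consumer, dataset, allowed_purpose).
RULES = {
    ("supplier_a", "oem_b", "sensor_logs", "ai_training"),
    ("supplier_a", "oem_b", "sensor_logs", "quality_audit"),
}

def request_data(provider, consumer, dataset, purpose):
    """Grant access only if a matching sharing rule exists;
    otherwise refuse, keeping the provider's data confidential."""
    if (provider, consumer, dataset, purpose) in RULES:
        return {"granted": True, "dataset": dataset, "purpose": purpose}
    return {"granted": False, "reason": "no sharing rule for this request"}

# The same dataset is shareable for AI training but not for other purposes.
print(request_data("supplier_a", "oem_b", "sensor_logs", "ai_training"))
print(request_data("supplier_a", "oem_b", "sensor_logs", "resale"))
```

The point of the sketch is the design choice the quote highlights: access decisions are made against declared rules rather than by copying data wholesale, so AI training can proceed across company boundaries without giving up control over sensitive business data.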
India’s AI system is quite vibrant, ranking third globally… Germany’s AI market is expected to be nearly €30 billion by 2030. Together we can combine Germany’s precision engineering with India’s scale to create win‑win partnerships in smart manufacturing, agriculture, cybersecurity, quantum communication and fraud‑prevention.
Provided a strategic, data‑driven vision that linked macro‑economic strengths of both countries to specific sectors, and introduced concrete use‑cases (5 ms spoof‑call detection, quantum hub, SanchalSathi platform).
Moved the conversation from high‑level ideals to actionable collaboration areas. Sparked follow‑up questions from Anandi and others about concrete joint projects, and reinforced the earlier call for inclusive, responsible AI by showing real‑world societal benefits such as the 5 ms spoof‑call detection, the quantum hub, and the Sanchar Saathi platform.
Speaker: Dr. Rajkumar Upadhyay
Technology must serve people, not the other way around. Our “AI made in Hessen” agenda combines innovation with responsibility… We invest €60 billion, fund 170 startups, and have created an AI quality and testing hub to turn trustworthy AI from a slogan into a practical challenge.
Articulated a policy‑level principle that aligns funding, regulation and ethical design, and gave tangible examples (robotic wheelchair, RISCA, satellite‑based plant disease detection) that illustrate how Germany operationalises responsible AI.
Re‑centered the discussion on public‑value outcomes and concrete funding mechanisms, prompting other panelists to reference their own responsible‑AI initiatives (e.g., Sindhu’s explainability focus, Thomas’s uncertainty wrapper). It also reinforced the summit’s “welfare for all” narrative.
Speaker: Prof. Dr. Kristina Sinemus
We have a platform called Sanchar Saathi (sancharsaathi.gov.in) that integrates telecom, banking and police data in real time to flag fraudulent or risky transactions… In 5 ms we can decide if a call is spoofed or genuine. This is AI at Indian scale.
Showcased a high‑impact, large‑scale AI deployment that directly addresses societal safety, illustrating how AI can be used for public good beyond commercial applications.
Provided a vivid example that resonated with the audience’s concern for inclusive growth. It prompted Sindhu to discuss explainability and trust, and reinforced the need for robust data‑space and governance frameworks mentioned earlier.
Speaker: Dr. Rajkumar Upadhyay
Our responsibility from an AI perspective is embedding AI across the core of our business processes… we must give customers explainability, transparency, fairness and auditability, especially as autonomous workflows become more common.
Connected the massive market reach of SAP with the ethical obligations of AI, emphasizing that trust and governance are not optional but integral to product design.
Echoed and expanded on Thomas’s trustworthiness theme, leading to a deeper discussion on how large enterprise software vendors can operationalise responsible AI, and aligning with Sinemus’s “technology must serve people” stance.
Speaker: Sindhu Gangadharan
AI will not automatically lead to better outcomes. It depends on the choices we make, what we fund, how we regulate, which ecosystems we build, and whom we include.
Summarised the central paradox of AI—its potential versus its dependence on human governance—providing a concise, thought‑provoking reminder that technology alone is insufficient.
Served as a concluding pivot that reinforced the earlier calls for responsible AI, prompting the moderator to wrap up the session with a focus on concrete next steps and collaborative studies.
Speaker: Prof. Dr. Kristina Sinemus
Overall Assessment

The discussion was steered by a handful of pivotal remarks that moved the conversation from generic enthusiasm about AI’s economic promise to a nuanced debate about trust, governance, and societal impact. Georg Enzweiler’s framing of inclusive growth set the agenda, which was deepened by Thomas Kuhn’s technical exposition of trustworthy AI and data spaces. Upadhyay’s data‑rich illustration of India’s scale and concrete use‑cases (fraud detection, quantum communication) turned abstract ideas into actionable collaboration opportunities. Sinemus and Sindhu reinforced the policy and enterprise perspectives, respectively, emphasizing that technology must be purpose‑driven and ethically embedded. These comments collectively shifted the tone from promotional to critical, introduced new thematic strands (trustworthiness, data‑space governance, public‑value funding), and prompted other participants to align their contributions with the emerging focus on responsible, inclusive AI. As a result, the panel moved toward concrete partnership ideas and a shared commitment to shape AI’s trajectory through coordinated regulation, funding, and cross‑border data infrastructure.

Follow-up Questions
How can we ensure that AI-driven economic growth is inclusive and minimizes negative effects on people and the planet?
Critical for aligning AI development with the summit’s ‘welfare for all’ motto and for guiding policy and investment decisions.
Speaker: Georg Enzweiler
What impact will AI have on labor markets, and how can potential disruptions be managed?
Understanding labor market effects is essential for designing education, reskilling, and social safety‑net policies.
Speaker: Georg Enzweiler
How can AI results be made trustworthy and reliable, especially in safety‑critical domains?
Trustworthy AI is necessary for adoption in healthcare, manufacturing, and transportation where errors can have serious consequences.
Speaker: Dr. Thomas Kuhn
How can companies preserve institutional knowledge when senior employees retire?
Knowledge retention prevents loss of expertise and supports continuity of operations, especially for SMEs.
Speaker: Dr. Thomas Kuhn
How can secure, rule‑based data spaces be implemented to enable AI training while protecting sensitive business data?
Data spaces are a key technology for sharing data across organizations without compromising confidentiality, enabling broader AI development.
Speaker: Dr. Thomas Kuhn
How can smart manufacturing standards, cross‑border industrial data flows, and safeguards be developed jointly by India and Germany?
Standardization and secure data exchange are prerequisites for scaling AI in manufacturing and ensuring interoperability.
Speaker: Dr. Rajkumar Upadhyay
How can AI be applied to improve agricultural productivity, yield, and farmer income in India?
Agriculture is a cornerstone of the Indian economy; AI can boost food security and rural livelihoods.
Speaker: Dr. Rajkumar Upadhyay
How can India and Germany collaborate on AI‑driven cybersecurity solutions capable of processing massive data streams (e.g., 10 TB/s) in real time?
High‑volume, real‑time threat detection is vital for protecting critical infrastructure and financial systems.
Speaker: Dr. Rajkumar Upadhyay
How can joint research on quantum communication and quantum‑safe encryption be pursued between the two countries?
Quantum‑ready security is a strategic priority as quantum computers threaten current cryptographic schemes.
Speaker: Dr. Rajkumar Upadhyay
How can AI‑based fraud detection systems be enhanced and shared between India and Germany?
Improving cross‑border fraud detection can reduce financial losses and increase trust in digital transactions.
Speaker: Dr. Rajkumar Upadhyay
What is Bosch’s vision and strategy for AI development and deployment across its product portfolio?
Understanding Bosch’s AI roadmap helps identify collaboration opportunities and industry‑wide impact.
Speaker: Anandi Iyer (to Dattatri Salagame)
How should SAP roll out AI innovations to its extensive client base while addressing client anxieties and ensuring responsible, compliant AI?
SAP’s large ecosystem makes it a critical test case for responsible AI adoption at scale.
Speaker: Anandi Iyer (to Sindhu Gangadharan)
How can Mercedes‑Benz integrate AI into its standardized, legacy manufacturing operations and processes?
Modernising legacy operations is essential for maintaining competitiveness and operational efficiency.
Speaker: Anandi Iyer (to Anshuman Awasthi)
Is the heightened German interest in India perceived as offensive or defensive, and what are its strategic implications?
Clarifying geopolitical intent can shape future partnership frameworks and trust building.
Speaker: Anandi Iyer (to Dattatri Salagame)
How will demographic dividend, inclusion, skilling, and reskilling be addressed in SAP’s vision for India?
Workforce development is key to leveraging AI benefits while ensuring equitable growth.
Speaker: Anandi Iyer (to Sindhu Gangadharan)
How can trustworthy AI be operationalised through testing methods and quality criteria?
Developing concrete testing frameworks is necessary to move trust from a slogan to a practical requirement for deployments.
Speaker: Prof. Dr. Kristina Sinemus
How can an AI innovation lab with high‑performance computing be structured to support SMEs and startups?
Providing accessible compute resources and advisory services can accelerate AI adoption among smaller enterprises.
Speaker: Prof. Dr. Kristina Sinemus
How can AI be used for early detection of plant diseases via satellite data to reduce pesticide use and increase yields?
Early disease detection supports sustainable agriculture and aligns with both countries’ food‑security goals.
Speaker: Prof. Dr. Kristina Sinemus
How can AI‑enabled load‑bearing robotic wheelchairs improve mobility for people with disabilities?
Demonstrates AI’s potential for social good and inclusive technology development.
Speaker: Prof. Dr. Kristina Sinemus
How can AI be applied to risk certification in cardiology for early disease detection?
Shows AI’s impact on preventive healthcare and potential for cross‑border clinical collaborations.
Speaker: Prof. Dr. Kristina Sinemus
How can data spaces be scaled to handle up to 10,000 transactions per second for AI training?
Technical scalability is crucial for large‑scale AI model training across industries.
Speaker: Dr. Thomas Kuhn
How can federated AI training be achieved so that multiple companies can contribute data without exposing raw data?
Enables collaborative model development while preserving data privacy and complying with regulations.
Speaker: Dr. Thomas Kuhn

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building the Workforce_ AI for Viksit Bharat 2047

Building the Workforce_ AI for Viksit Bharat 2047

Session at a glanceSummary, keypoints, and speakers overview

Summary

The summit opened with Dr. Washima stressing that AI is the next transformative technology after electricity and calling for trust-based, collaborative ethical frameworks to guide its rapid deployment for economic development and social good [4][10][11-12]. Chairperson Shubhavi S. Radha Chauhan highlighted the Prime Minister’s “Mani Vision” for a human-centric AI governance model and explained that future AI will rely on small, sector-specific language models rather than monolithic systems, requiring customized competency frameworks and continuous learning for officials [23-24][30-32][33-34].


The moderator introduced a diverse panel, including Brazil’s Guilherme Albusco Almeida, Google Cloud’s Anil Shivastava, and Robin Scott of Apolitical, who would discuss capacity building, governance and international cooperation [44-48]. Veteran technologist Subramanian Ramadorai framed AI as an enabler of societal transformation, noted India’s “third way” between US market-led and Chinese state-led AI, and pointed to the country’s 5.8 million-strong professional base and the need for small, edge-deployed models to serve rural populations [73-76][97-98][99-101][103-104].


Shivastava argued that AI cannot simply be layered onto legacy systems; instead, data silos must be broken, processes re-engineered, multilingual support added, and security and data-sovereignty considerations addressed [122-130][131-138]. Almeida emphasized Brazil-India collaboration in R&D and capacity building, citing Brazil’s ethical-assessment framework and four-tier training programmes for civil servants, and proposed a South-South coalition to scale knowledge sharing [148-155][160-166]. Robin Scott presented survey data showing that only 26% of public-sector AI implementers understand their own ethical frameworks and that merely 45% have evaluation plans for pilots, while expressing strong optimism about AI’s productivity potential [182-190][191-194].


In response to environmental concerns, Robin noted a new AI-climate course co-developed with Stanford, Almeida described “green AI” and AI-driven climate monitoring such as Brazil’s forest-registry system, and Shivastava pledged Google’s commitment to carbon-neutral data centres by 2030 [201-203][205-208][215-218]. Minister Dr. Jitendra Singh underscored that governance, capacity building and AI must evolve together, praised recent regulatory reforms that removed 2,000 outdated rules, and stressed that integrity and a human-in-the-loop approach are essential for trustworthy AI deployment [292-301][304-311][322-327][332-338][382-384]. He also linked the launch of the Digital Capacity Building Alliance to the broader goal of creating a global public good that blends AI principles, digital standards and inclusive learning pathways [263-267].


The moderator concluded by noting the successful launch of the blueprint and thanking participants, signalling a commitment to continue collaborative efforts across nations and sectors [387]. The panel agreed that addressing ethical, evaluation and environmental gaps through joint capacity-building initiatives is critical to harness AI for public good [182-190][201-203][263-267].


Keypoints

Major discussion points


Human-centric, ethical AI governance is essential – Speakers repeatedly stressed the need for trust-based, collaborative frameworks that embed ethics, accountability and inclusivity into AI deployment for public services.  [4-10][23-33][61-68][100-103]


Building AI capacity within government workforces – The Capacity Building Commission, Mission Karni Yogi and India’s “Kalmaragi” initiative aim to up-skill civil servants, create sector-specific competency frameworks, and use small, context-aware language models to serve local needs.  [30-33][92-98][251-267]


India’s unique AI ecosystem and digital public infrastructure – Leveraging a large IT talent pool (5.8 million professionals) and existing platforms such as UPI, Digi-Locker and the Digital Public Goods architecture, India seeks to pioneer “small-model” AI at the edge for rural and underserved populations.  [77-88][94-98][92-98]


International collaboration, especially Brazil-India, on AI capacity and ethics – Panelists highlighted joint R&D, shared capacity-building programmes, and the development of ethical-assessment frameworks as ways to shape global AI norms.  [145-166][169-195]


Environmental sustainability of AI deployments – The summit raised the climate impact of AI compute and called for joint government-industry actions, including green-AI research and carbon-neutral data-center commitments.  [196-204][205-214][215-218]


Overall purpose / goal of the discussion


The event was convened to launch a Digital Capacity Building Alliance and to articulate a shared roadmap for “AI for economic development, social good, safe and trusted AI, and human capital,” linking policy, industry and academia to create scalable, ethical AI capacity across governments, especially in the Global South.  [11-13][250-267][276-284]


Overall tone and its evolution


– The opening remarks are formal and visionary, emphasizing responsibility and trust-based frameworks.  [4-10]


– The discussion then shifts to a technical and problem-solving tone, detailing legacy-system challenges, data-security concerns, and the need for re-engineering.  [122-138]


– A collaborative and optimistic tone emerges during the Brazil-India exchange, focusing on partnership opportunities and shared learning.  [145-166][169-195]


– The audience segment introduces a more urgent, alarmist tone about existential risks and timelines, creating a brief moment of tension.  [227-243]


– The closing remarks return to a hopeful and reaffirming tone, stressing integrity, human-in-the-loop principles, and the promise of the newly launched alliance.  [288-306][387]


Overall, the conversation moves from high-level aspiration, through concrete technical and collaborative challenges, to a concluding call for unified, ethical action.


Speakers


Shubhavi S. Radha Chauhan – Chairperson of the Capacity Building Commission; involved in public administration and capacity-building initiatives [S1].


Dr. Washima – Opening speaker; no specific role or title mentioned in the transcript [S3].


Dr. Jitendra Singh – Honorable Minister, Minister of State for Personnel, Minister of State for Personal Grievances and Pensions; focuses on administrative reforms and India’s science & innovation agenda [S5].


Anil Shivastava – Chief Architect for Google Cloud’s work in the public sector; senior consultant on AI-enabled public-service solutions (role described in the transcript).


Speaker 1 – Event host/introduction speaker; no explicit role or title provided.


Subramanian Ramadorai – Chairperson of Karni Nagi Bharat and former Managing Director & CEO of Tata Consultancy Services; expertise in technology engineering, government-industry intersections [S12].


Guilherme Albusco Almeida – Senior consultant at the Institute of Management and Corporation in Public Services, Government of Brazil; works on government reform, digital transformation, and AI ethics [S14].


Speaker 3 – Unidentified speaker; no role or title mentioned.


Moderator – Session moderator for the panel discussion [S19].


Audience – General audience members; includes at least one identified participant, Professor Charu from the Indian Institute of Public Administration [S23].


Robin Scott – Co-founder and CEO of Apolitical, a global online network of public servants; focuses on capacity-building and AI governance.


Additional speakers:


Mr. Frager – Mentioned by the moderator when thanking “Mr. Frager”; no role or title provided in the transcript.


Full session reportComprehensive analysis and detailed insights

The summit opened with Dr Washima reminding the audience that “technology, they say, is a great leveler, and AI, they say, is the next big thing after electricity” and that the responsibility of the room was “to carve out trust-based collaborative ethical frameworks so that the demands of the fast-paced dynamic AI-DPD age… are met by a well-informed design and delivery model” [4][10-12]. He linked this to the India AI Impact Summit theme of AI for economic development, social good, safe and trusted AI, and human capital [11-13].


Chairperson Shubhavi S. Radha Chauhan then introduced the Prime Minister’s “Mani Vision”, a human-centric framework for ethical, accountable and inclusive AI governance [23-24]. She argued that the future of AI will move away from massive monolithic models toward “small language models, context-specific, sectoral, and decentralised” [30-31] and that this shift requires “customised, sector-specific competency frameworks” and continuous learning for officials [32-34]. She highlighted the Commission’s work over the past year in developing holistic policy frameworks, operational guidelines, personalised learning pathways and dynamic governance models for training institutions [34-38], and called for the event to serve as a “plea… that Mission for New Delhi, as a public good, must inform every other government” [39-41].


The moderator presented the diverse panel: Guilherme Albusco Almeida from Brazil, Anil Shivastava of Google Cloud, and Robin Scott, co-founder and CEO of Apolitical [44-48]. Their expertise spanned government reform, digital transformation, and global public-service networks.


Subramanian Ramadorai framed AI as a movement that must “elevate humanity” [68-69] and positioned India’s approach as a “third way” between US market-led experimentation and China’s state-led techno-nationalism [73-76]. He underscored India’s talent pool of 5.8 million professionals [77-88] and cited concrete examples such as the Tata AI Saki Immersion Programme, which empowers rural women artists with AI tools [92-93]. He argued that the next billion AI users will interact with “tiny embedded AI in phones, tractors, classrooms, clinics and local government systems” [96-98] and that capacity-building must enable officials to “deconstruct complexities, impose appropriate guardrails on data… before using authentic insights to take decisions” [99-103]. The launch of the Digital Capacity Building Alliance was presented as a way to operationalise these ideas [104-105].


Anil Shivastava warned that AI cannot simply be layered onto legacy systems. He explained that existing IT solutions are “centred… built to solve a specific problem for the kind of technology that we had at that time” and that they contain “silos of data, silos of business logic” [124-128]. Consequently, AI adoption requires re-engineering of systems to provide contextual data, multilingual support for frontline workers such as ASHA staff, and new security and data-sovereignty safeguards [129-138].


Guilherme Almeida highlighted Brazil-India collaboration as a strategic avenue for joint R&D and capacity-building. He noted that Brazil has an “ethical-assessment framework for AI implementation” and four distinct training tracks for senior leaders, IT managers, data curators and general civil servants [160-166]. He advocated a South-South coalition, citing partnerships with Apolitical, to scale knowledge and build a “global perspective” on AI governance [147-155][166].


Robin Scott presented findings from an 8,000-person global survey, revealing that only 26% of public-sector AI implementers “understand their own government’s ethical frameworks” while 75% are essentially “freestyling” [182-187]. Moreover, although 72% plan pilots, only 45% have evaluation plans [188-190]. Despite these gaps, she reported that “over 90% of public servants are very optimistic about the role that AI can play” and that the productivity prize for getting AI right is estimated at $1.75 trillion [191-194].


Environmental sustainability was raised by several speakers. Robin announced a new “AI-and-climate” course co-developed with the Stanford Doerr School of Sustainability [201-203]. Guilherme described “green AI” initiatives and the use of AI to monitor Brazil’s Rural Environmental Registry, detecting deforestation and supporting reforestation [205-213]. Anil pledged Google’s commitment to make all data centres carbon-neutral by 2030 and to develop energy-efficient models [215-218].


Minister Dr Jitendra Singh linked governance, capacity-building and AI as “dynamic processes” that must evolve together [292-298]. He praised recent regulatory reforms that removed about 2,000 outdated rules [310-311] and stressed that “integrity” cannot be replaced by AI and that a human-in-the-loop approach is essential for trustworthy AI [322-327][332-338]. He expanded the acronym MANAA as Moral and ethical systems, Accountable governance, National sovereignty, Accessible and inclusive, Actual validity/legitimacy [322-327]. He announced the launch of the Digital Capacity Building Alliance, describing it as a “global public good that fuses AI principles, digital public-good standards and the mission-cum-worthy model” [263-267] and called for collaborative action across governments, industry, academia and civil society [268-270].


Speaker 1 elaborated on the broader policy context, noting that India’s 2020 Mission Kalmaragi created the national digital learning platform “Kalmaragi”, offering personalised learning paths and smart decision-support [251-259]. He positioned the Digital Capacity Building Alliance as a “unique model for demand, design, delivery, and continued evolution” that will bring together funding, standards, industry, academia, civil society and startups to build inclusive, ethical capacity-building solutions for the Global South [263-269].


The audience interjected with a stark warning, invoking the Doomsday Clock (“85 seconds to midnight”) and urging that “Bharat by 2047” must not be a procrastination of responsibility [227-234][236-244]. An audience member, Prof Charu (Indian Institute of Public Administration), asked for a generic, collaborative international AI impact assessment tool [246-250]. The moderator acknowledged the urgency, reiterated the launch of the Digital Capacity Building Alliance blueprint as a concrete step forward, thanked everyone, requested the dignitaries to stand for photographs, and formally closed the event [387-390].


Overall, the participants reached broad consensus on several core points: the necessity of trust-based, human-centric ethical frameworks; the shift toward small, sector-specific, multilingual AI models; the centrality of systematic capacity-building that includes re-engineering legacy systems; the importance of South-South collaboration, particularly Brazil-India, to develop shared R&D, training tracks and ethical-assessment tools; and the imperative to align AI development with climate responsibility through green-AI research and corporate carbon-neutral commitments [10-13][23-33][30-32][182-190][201-203][205-213][215-218].


Disagreements emerged around timelines: audience members warned against “procrastination” and demanded decisive action by 2047, while speakers cautioned that a realistic roadmap is needed now rather than waiting for a symbolic future date [227-234][236-244][244]. The request for a generic international AI impact assessment tool remained open, reflecting divergent views on how best to standardise impact evaluation.


Key take-aways from the summit include: (i) AI governance must be human-centric, built on trust-based ethical frameworks and collective policy discourse; (ii) India’s Mani Vision and Mission Kalmaragi aim to embed sector-specific, multilingual, edge-device AI into public services; (iii) effective integration requires re-engineering of legacy systems, contextual data pipelines and new security safeguards; (iv) Brazil-India cooperation is seen as a strategic pathway for joint R&D, capacity-building platforms and harmonised ethics frameworks; (v) significant gaps remain in public-sector AI readiness, notably low awareness of ethical frameworks and insufficient pilot evaluation, despite high optimism and large productivity potential; (vi) environmental sustainability is a priority, with initiatives ranging from AI-climate curricula to green-AI hardware pledges; (vii) the Digital Capacity Building Allowance was officially launched as a global public-good model linking digital learning, AI-enabled governance and inclusive capacity-building; (viii) practical examples such as the Tata AI Saki programme and AI-assisted ASHA workers illustrate AI’s impact on frontline services.


Action items emerging from the discussion are: the operationalisation of the Digital Capacity Building Alliance; Google’s commitment to carbon-neutral data centres by 2030 and collaboration with India on sustainable AI infrastructure; development of a generic, collaborative international AI impact assessment tool (as raised by Prof Charu); deepening Brazil-India joint R&D and capacity-building initiatives, leveraging platforms like Apolitical; creation of sector-specific small language models for edge deployment; establishment of continuous learner-feedback loops within Mission Kalmaragi; and the rollout of an AI-and-climate course in partnership with the Stanford Doerr School of Sustainability.


Unresolved issues include the precise mechanisms, timelines and financing for Brazil-India collaboration; how the proposed international AI impact assessment framework will be localised across diverse legal contexts; concrete steps to raise public-sector awareness of ethical frameworks and institutionalise systematic pilot evaluation; and a detailed roadmap for achieving India’s AI and governance objectives by 2047 without falling into procrastination. Suggested compromises involve adopting a hybrid model that blends AI automation with human-in-the-loop oversight to preserve integrity; balancing rapid deployment of lightweight models with robust ethical safeguards; combining green-AI hardware practices with AI applications that support climate policy; and fostering public-private partnerships that share expertise while maintaining regulatory independence [322-327][263-267][215-218][332-338].


Session transcriptComplete transcript of the session
Dr. Washima

and partnerships from the Capacity Building Commission to deliver welcome remarks. Good afternoon. Thank you, Mustafa. A very good afternoon to all of you, distinguished guests, panelists, fellow participants, colleagues from Karni Yogi Bharat and Capacity Building Commission, and a warm welcome to everyone. Technology, they say, is a great leveler, and AI, they say, is the next big thing after electricity. We as individuals are part of society as individuals and a dual role of individuals. And we are deeply impacted by these two to an extent that we cannot distinguish between these two anymore. As the popular Bollywood line says, Mayor Mary Panhai is actually me. And my AI, in certain context. This room carries the huge responsibility of making that distinction happen.

Responsibility is to carve out trust-based collaborative ethical frameworks so that the demands of fast-paced dynamic AI-DPD age, which constantly creates push-up demands for faster, better, safer public services, is met by a well-informed design and delivery model. Today we gather here as a first step, aligned with the India AI Impact Summit theme, AI for economic development, social good, safe and trusted AI, and human capital. The need for collective discourse at the policy level is crucial, whenever to harness equitable benefits, mitigate risks, and to ensure an inclusive governance transformation. To carry this foundation forward, we have our distinguished panelists, we have our chairperson, and we look forward to the next session. Thank you for listening today.

Welcome, everyone.

Moderator

Thank you, Dr. Washima. I now invite our Chairperson of the Capacity Building Commission, Shubhavi S. Radha Chauhan, to deliver the opening address.

Shubhavi S. Radha Chauhan

Thank you. Thank you, Mr. Sir. Namaskar. It’s my privilege to extend a very warm welcome to all of you on behalf of Team Mission Karni Yogi. And I must disclose that this speech that I’m going to read out is handcrafted. No AI in the process has been used. Yes, absolutely. Be compliant. Our Honorable PM yesterday outlined Mani Vision, a human-centric framework for ethical, accountable and inclusive AI governance. Mission Panayogi has and shall continue to relentlessly craft and embed these wish capabilities that will translate this vision into reality. Every service today must evolve at a pace, hitherto untraceable. It must learn continuously, develop deep competencies and dynamically adapt to eternally emerging work and workspaces. Underprivileged competency and skill is a humanistic capacity, that non-negotiable layer of intellect, diligence and values that has to flavor every decision made and every service delivered by governments and its systems.

From the community health worker delivering nutrition to an expecting mother to the balancing worker strategizing access to specialized healthcare. It is the quality of this human layer that will ultimately define the quality of service we deliver to our citizens. The future of AI, more precisely the agentic AIs, will not be in massive monolithic models. It will be in small language models, context-specific, sectoral, and decentralized. This would entail creating the customized, sector-specific competency framework that can suitably deploy AI agents to arrive at decision points that solve local needs and problems in its context. Capacity building must therefore focus on enabling our officials to deconstruct complexities, impose appropriate guardrails on data and its use, before evaluation benchmarks.

before using the authentic insights to taking decisions. In the past year, the Commission has developed holistic policy frameworks that have been tested and institutionalized, established operational guidelines, especially those for identifying competency gaps, leading to personalized learning pathways for each one of our learners. Dynamic governance models have evolved for stakeholders, especially our training institutions, ensuring they remain agile and responsive to competency demands. Continuous learner feedback loops, rigorously analyzed, have become integral to refining and strengthening the system. We are at the community portal’s times today as a testament to this remarkable trajectory. It has developed teamwork capacities effectively, at scale, and across the human race, to achieve this diversity of India’s governance ecosystem. It is in the context of this evolving journey that we see today’s event as an opportunity to take the plea, grounded in deep faith, that Mission for New Delhi, as a public good, must inform every other government that is on a similar and seminal mission to deliver inclusive, ethical and impactful public services.

I sincerely hope that this deliberation here produces a cohesive and common pathway for all of us to enter upon as global partners. Thank you so much.

Moderator

Thank you so much, ma ‘am, for placing the panel through the address, placing everything into the context for the panel discussion. Thank you, Mr. Frager, and the remaining proceedings of the evening. Now I take your immediate pleasure in inviting your panelists for today’s discussion and also more later Mr. Professor Amogarai sir May I kindly request Professor Amogarai sir to join us on the rise The last panel discussion will be moderated by Mr. S. Amogarai Chairperson of Karni Nagi Bharat and former M.D. and CEO of Tata Consultancy Services Mr. Amogarai is of a specialty at the intersection of technology engineering and technology governments and institutions and he has worked at key institutions across academia, industry and public policy institutions including as advisor to the R.W.P.

Minister in the National Council on Scale Development We welcome you sir On the panel, we are joined by Guilherme Albusco Almeida from Brazil, a senior consultant at the Institute of Management and Corporation in Public Services, Government of Brazil, working at the intersection of government reform and digital transformation. Anil Shivastava, Chief Architect for Goodwill’s work in the public sector. He currently leads the Public Policy Strategic AI Solution Engagements of Global Cloud in India. And our final panelist for today, Robin Scott. She is the co-founder and CEO of A Political Network, a global online network of public servants. Thank you so much for joining us and taking time out for this session. With this, I hand it

Subramanian Ramadorai

Thank you. So, the mic’s there. Two minutes. Then I’ll say the second. No good answers. You got nothing to do. Before I begin, I want to extend a very warm welcome to the panelists. Thank you so much for agreeing to be a part of this. It will be a learning experience even for me, for sure. After spending over five decades in the technology industry, I’m probably the oldest here. It puts me with immense hope to sit alongside a group of young leaders who are shaping the next chapter of this global technology revolution. Thank you for being here and for looking forward to the exchange. If we look back at past technological revolutions, we rarely talk about the technologies themselves.

Instead, we talk about what they enable. Electricity is not celebrated because we built our plants. It is celebrated because we brought a revolutionary transformation into the world. It is a transformation to the quality of life. AI presences in the Sumedha moment. It gives us unprecedented power not to do things better but to do better things. We think how we explore, educate, govern, create, collaborate, heal and protect the people and the planet most importantly. But the most important question for this summit is not how far we can scale AI but how we can recognize it’s a movement in a direction that elevates humanity. Sometime ago I read an article titled Bridges as Humanity’s Greatest Legacy. It has spoken about the universalization instinct and how it has long leaned towards coexistence, cooperation and balance including.

shared progress. From Mahatma Gandhi onwards, India has consistently attempted to build bridges and promote peace and harmony across the world. We are entering the AI era at a time when capitalism is increasingly intertwined with geopolitics and, of course, conflict. That reality demands deep reflection, because the choices we make today will determine whether AI becomes the nuclear arms race of the 21st century or the space race that took humanity to the moon. Globally, AI is framed as a binary race: market-led experimentation in the United States versus state-led techno-nationalism in China. However, I believe India offers a third way, in partnership, of course. For over five decades, India's IT industry has built trust, reliability, and delivery capabilities across the world.

We know we have 5.8 million professionals. This legacy gives India a unique strength to deploy technology safely and, of course, responsibly at every stage of the technology industry. For example, the Tata AI Sakhi immersion programme is empowering rural women artisans to use AI as a tool for livelihood opportunity.

India's AI journey is also anchored in digital public infrastructure, which includes Aadhaar for identity, UPI for payments, DigiLocker for documents, and the consent framework, the Data Empowerment and Protection Architecture, or DEPA as it is called, at scale. These are not merely platforms; they are trust architectures. The next billion AI users may not interact with giant parameter models. They may interact with tiny embedded AI in phones, tractors, classrooms, clinics and local government systems. India's rural opportunity lies in small language models that are absolutely domain-specific and can run on edge devices, operate in rural, offline environments, and solve real problems. And I could not miss the squadrons of children going through the exhibitions and exhibits. As artificial intelligence becomes embedded across public administration, the challenge for governance is not limited to technology or adoption.

It extends to ensuring that public officials understand how these systems function, where their limitations lie, and how human judgment, responsibility and ethical consideration must guide their use. Mission Karmayogi has established a model demonstrating that systemic, technology-enabled civil services reform is achievable across diverse administrative contexts. Building on this institutional foundation, the next horizon is to embed AI within workforce transformation frameworks while contributing to the shaping of global norms on responsible AI in governance. In this context, this summit sees the launch of a blueprint for a digital capacity building alliance that sets out a shared framework for developing AI-era capabilities for public officials. Let us take this conversation forward to see how the proposed alliance can be operationalized and how diverse partners can work together to translate the blueprint into sustainable actions.

I'll turn over to the questions. For each of the panelists, I've got a couple of questions, but I'll just start with one or two for each of you. They're slightly different for each, and none of them have been generated by AI; again, just a disclosure. So let me start with Anil Shivastava of Google Cloud. Google is at the forefront of building global digital infrastructure and, in many ways, of driving AI adoption across enterprises and public services. The question is: governments often adopt new technologies in fragmented ways. What technical and operational risks arise when AI systems are layered onto legacy infrastructure without structural reform, and how can initiatives like Mission Karmayogi help align infrastructure modernization with workforce capability development? Over to you, please.

Anil Shivastava

So, first of all, thank you. Thank you so much. I'm honored to be here at Bharat Mandapam as part of this panel of esteemed experts. We are talking about the capacity building imperative for Viksit Bharat 2047. You know, I think the question that sir asked is a very important one as we are on this journey of using AI in our day-to-day lives. Governments, and especially the Government of India, have a huge focus on using AI to improve the quality of life of citizens. Is there a way we could actually leverage AI to make the lives of our citizens easier? And also to make the lives of government civil servants, you know, much, much easier in the work that they do today.

The kind of effort that goes into delivering citizen services: is there a way that we could actually leverage AI in solving for that? Now, to sir's question, I think the key, and it is a very important point, is that AI is not, you know, a layer that you can just put on existing systems. The existing IT systems were developed with the objective of solving specific problems. Please hear me out. Sure. Right. So, you know, the existing IT systems are very application-centric; they were built to solve a specific problem with the kind of technology that we had at that time. With AI, we are looking to change the way humans interact with these systems.

The existing systems actually have silos of data and silos of business logic, whereas with AI, as we look at this more holistically, you need to really have contextual data to train models to make them useful for you. And so we need to really look at re-engineering some of our existing IT systems so that they can harness the potential of AI in solutions. So that is one point of view. The other is to really prepare data so that we can train models, whether it is the LLMs or the SLMs, and whether it is at the edge, where we would actually have AI in a small device, a mobile device.

An ASHA worker could actually go out and deliver services using AI in their own native language. So, you know, we need to ensure that we support multilinguality, the Indic languages, to deliver on those services. Now, to build those systems, we really need to change not only the underlying technology but also the processes, which need to be re-engineered. So that is one aspect that we should think about. Also, from a security and data sovereignty perspective, we need to re-look at the exposure that AI brings to our existing systems; the kind of attack vectors, you know, the way the systems have been built today, we will need to re-look at.

Some of the, you know, vectors or some of the issues that we have today need to be resolved. So that would be my perspective, sir. Thank you.

Subramanian Ramadorai

Gui, he told me, it's very easy to remember his name, because Guy is a familiar word too.

Guilherme Albusco Almeida

Yes.

Subramanian Ramadorai

How can countries like Brazil and India collaborate more closely in shaping the global conversation around care, trust, and alignment in AI? What do you think are the areas we can collaborate together which will have the greatest global impact as AI becomes more autonomous and more deeply embedded in society?

Guilherme Albusco Almeida

Well, that's a fantastic question. I will try to bring up some aspects of it, but I think we'll keep answering it until at least 2047. The point is, first, I think Brazil and India are really close and can collaborate a lot. I can testify to that because this is my fifth trip to India. We've been exchanging a lot, technology-wise. When it comes to AI, I think we should consider different aspects. We just mentioned data, and data for training models, but I think there's strong room for collaboration when you talk about R&D, right? Because there are not only similarities but also complementarities; there are things that are complementary to one another. So I guess that there's strong room for cooperation and collaboration.

But also in capacity building. I am a great fan of Mission Karmayogi and the Capacity Building Commission. We have similar organizations in Brazil; we have been training civil servants as well, through an online platform. And I believe that a digital infrastructure approach to capacity building is also a way to bring this to more people, to make this scalable in a way that we can actually build knowledge, build capacities, and make things change within government. Of course, we need to be careful about the risks of AI. In Brazil we have developed a framework for the ethical assessment of AI implementations. We have also provided guides explaining not only how AI works, but what cautions you should have when you're using AI within the public service.

Of course, we should consider boundaries and safeguards in AI implementation, but we should not be prevented from using it for the betterment of people's lives and to empower our population. And I think that training and capacity building are crucial for that. In Brazil, we have at least four different profiles for capacity building: one for senior leaders, one for IT managers, one for data curators, and the other for general civil servants, in which we organize the knowledge you're supposed to develop and build in order to use AI properly and to build AI solutions. And, going back to the Brazil-India connection, I guess both nations are well positioned to, I would say, lead, or at least to conduct this conversation in a global perspective. We have great partnerships with Apolitical as well; I've been working a lot with them. And I think that a coalition of willing organizations building knowledge for AI in the public service is something that could be built, and if we can bring a South-South flavor to that, I think we're better positioned to provide the transition we want for governments and for the world.

Thank you.

Subramanian Ramadorai

Robin, you work with governments around the world. What are the biggest gaps you see in AI readiness within public institutions? And how can we shift the global conversation towards workforce reinvention?

Robin Scott

Thank you so much. That's a big question. It is such an honor to be here. And this seating arrangement is particularly meaningful to me, because we're not only honored to partner with the Capacity Building Commission and Mission Karmayogi, but we are longstanding partners of ENAP, the excellent Brazilian school of government. And Google.org has funded us to provide world-class training on AI, for free, to a million public servants, and we're 400,000 into that goal, including in India. So this particular configuration is very meaningful. And I also want to say something about Brazil and India, which I think links the two nations. In our experience working with them, they both understand that capacity building is not something that should be pushed to the side.

It is not an afterthought; it is an engine of innovation. ENAP has an innovation unit within its school, and it is strategic. And especially with AI, it's even more strategic, because you don't get intelligent technology unless you have people intelligently supported to work alongside and in partnership with that technology. So I really appreciate the ambition and vision that both countries bring to capacity building. I'll point to just a couple of gaps. One is around ethical frameworks. You mentioned Brazil has one; most countries have one. According to our data, from an 8,000-person global survey of people implementing AI in their governments, people whose job it is to roll out the technology, only 26% say they understand their own government's ethical frameworks.

So in other words, roughly 75% are freestyling, and that builds a great deal of risk into the system. We also have a gap between talk and ambition on one hand and evaluation on the other. When you talk to leaders, 72% say they have a pilot or will have one this year, but only 45% of them say they have a plan to evaluate the performance of that pilot. And there is no point piloting something without evaluation. There is a lot more to say, but I just want to end on a note of optimism. Well over 90% of public servants are very optimistic about the role that AI can play. And there's a $1.75 trillion productivity prize for getting this right, according to BCG. So we've got the optimism, we've got the energy, and these gaps are big, but they're not impossible to close.

Thank you.

Subramanian Ramadorai

Just one final question, if you can answer. As we expand AI-centric capacity building, scaling digital platforms, increasing compute and embedding AI into public systems, all of us agree we must also confront the environmental footprint of these technologies. How can governments and AI companies work together to ensure that AI-driven public infrastructure is also aligned with climate responsibility, energy efficiency and sustainable growth? Anyone can take it; you may all agree with each other, or differ, confidently.

Robin Scott

Well, I can just offer that we have developed a course on AI and climate, and on understanding the links between them, with the Stanford Doerr School of Sustainability. So we literally have a program to answer your question. I'll leave it at that.

Guilherme Albusco Almeida

Well, I think there are two separate ways in which it could be framed. There’s AI for green and green AI. So aiming for sustainability on the power you provide for the GPUs and so on is crucial. But also we could use AI to leverage the potential of climate policies. If you put together data, meteorological data, data from forests and so on, you can bring intelligence and knowledge, which could lead to better informed decisions. In Brazil, we have a system in which we mapped all the forests in private lands. It’s the Rural Environmental Registry. And we’re using AI to understand whether forests have been cut down or not. So we can use different AI tools to promote not only fighting deforestation, but also promoting reforestation.

So there are several different potential tools, and they could be used. And if we can blend them together, we can achieve even more.

Anil Shivastava

So I would actually say that, you know, Google, first of all, has committed that by 2030 all our data centers will be carbon neutral. And we want to partner with the Government of India to ensure that all the data centers being built in the country have similar targets, to ensure that they are carbon neutral. That is from an infrastructure design perspective; but from a technology perspective, we can always build models which are energy efficient. So I would leave it at that, but I think that's a very important point for India as a country. Thank you.

Subramanian Ramadorai

Any one question from the audience? Go ahead. Oh, it doesn't work? Somebody, please give them the mic. But we should say ladies first. I don't know why you… Go ahead, go ahead. Sorry.

Audience

I will just take the metric of the Doomsday Clock, which was just revised on 27th January to 85 seconds to midnight. Now, simply translate that into a thousand-mile downhill journey of an overloaded truck with weapons of mass destruction that can disperse our civilization. Today, we are at 80 meters from the edge of the precipice. That's how dangerous it is. We are being so cannibalistic of the future of our children. My question is: is Viksit Bharat by 2047 actually a procrastination of our responsibilities? We just don't want to own up to what's happening today.

Subramanian Ramadorai

I agree, yeah.

Audience

So should it come as a concert of civilizations by 2026 itself, right here, right now?

Subramanian Ramadorai

I think the transformation starts very early, and a lot of activities are happening in the country. 2047 is a dream, the hundredth year of our independence. What we need is a roadmap to achieve it; that's what is important, rather than saying nothing is happening and suddenly something will happen in 2047.

Speaker 3

One more question here. Last question, please. Thank you.

Audience

I'm Professor Charu from the Indian Institute of Public Administration; we have met some of you. Thanks. My question is with regard to the lack of availability of consolidated international frameworks: in terms of AI procurement guidelines, in terms of AI ethical frameworks, in terms of competency frameworks. We do have the UNESCO AI competency framework, but we need to hyper-localize it to context, which Robin and I were discussing in terms of various countries at another event. So maybe we need something more generic: a collaborative international AI impact assessment framework and tool which could look into capabilities not just at the digital leadership level but across the whole organization or country. Thanks a lot.

Moderator

Thank you. Any comments? All right. Okay, thank you so much, and thank you to all the panelists and participants. We are now also joined by Honorable Minister Dr. Jitendra Singh, Minister of State for Personnel, Public Grievances and Pensions, a visionary leader who has been at the forefront of administrative reforms and India's science and innovation agenda. Sir, it's a pleasure to have you with us. Thank you. Thank you so much, Mr. Minister, for joining us and taking time out of your busy schedule. What was just witnessed in the form of a panel discussion is a small reflection of what the Capacity Building Commission and Karmayogi Bharat are hoping to achieve through what we seek to announce today.

May I now request you, Mr. Minister, to kindly launch the blueprint for the digital capacity building alliance by pressing the button.

Speaker 1

And to govern at this scale, at this pace, we need an ever-evolving system. So India built one. In 2020, our leader, Prime Minister Narendra Modi, launched Mission Karmayogi to build a future-ready, citizen-centred civil service, and at the heart of it, the Capacity Building Commission: a living capability with foundations of trust, empathy and inclusion. Part of this is the iGOT Karmayogi platform, India's national digital learning platform, accessible anytime, anywhere. And now, the next leap: AI-enabled governance, personalized learning paths, smart decision support, from reactive systems to an adaptive capacity model. All governments, especially in the Global South, face the same challenges: demand-driven needs, technology disruption, complex workflows, and the move towards agile institutional capacity. India has a working, scalable model, a global public good.

Today, the Capacity Building Commission unveils a proposal to forge a Digital Capacity Building Alliance, an alliance that fuses global AI principles, digital public good standards, and the Mission Karmayogi model: a unique model for demand, design, delivery, and continued evolution. It is an invitation to build a shared, non-proprietary foundation for capacity building across nations. This alliance aims to bring together capacity-building institutions and global funders, to drive policy design and standards; industry, to build digital commons and specialized solutions; academia, for a network of knowledge, research, and innovation; civil society, to champion systemic equity and ethical accountability; DPG partners, to orchestrate ecosystems for shared impact; and startups, to catalyze solutions and co-creation. Steered by the Capacity Building Commission and Karmayogi Bharat of the Government of India, it is a global public good for inclusive, ethical capacity building.

In the spirit of Sarvajan Hitaya, Sarvajan Sukhaya: welfare for all, happiness for all.

Moderator

Thank you so much, sir, for launching the blueprint. May I kindly request the Honourable Minister and all the dignitaries on the dais to stand for photographs. Thank you. May I now invite the Honourable Minister to the podium to deliver the keynote address for the afternoon. Thank you.

Shubhavi S. Radha Chauhan

You saw, of course, how it concluded, with Dr. Ramadorai giving the perspective of how AI and its use pan out for public services, what the challenges are by way of capacity building, and how valuable the learnings we have had under Mission Karmayogi are going to be to inform this whole journey that all the global partners, global governments and stakeholders are going to take their further steps towards. And the summit has seen the Honorable Prime Minister, and the MANAV vision that he explained yesterday, which brings in the need and importance of having a human-centric approach: the hands of the human in every decision that is being made that will impact citizens in every which way.

So I think the world of capacity building is looking at the capacity and capability that we need to put in the hands of every public servant, from the secretary at the policy level to the community-level frontline worker. How do they use, navigate and build the intellect that is needed to make the correct, ethical, value-based decisions when it is going to impact humankind in general? I think we have had a lot of fruitful insights from our partners, the panelists. And this is almost a culmination of the long months that we have spent discussing these issues with our stakeholders, whether from industry, our partners, our service providers, academia, or startups. We were able to bring all of them together today.

And the document that has been unveiled by the minister just now brings all of those learnings and insights together into a call to action for all of us who are working as experts in our fields: to come together, collaborate, and strengthen each other's hands and responsibilities to forge that pathway with the human centricity that is required in handling the capacities that we have given to this new creation of ours, artificial and augmented intelligence. I think, sir, that is where your keynote would now give us the further needed guidance on how the political leadership views the whole capacity building space. And like Robin said, this is a very important but oft-sidelined sector of public governance; now we have the opportunity of bringing it center stage.

I don’t think we should lose this opportunity to take the leap with utter faith. Thank you, sir. Thank you.

Dr. Jitendra Singh

Thank you for reminding me that you needed my guidance, and, more than me, for reminding all the women in the room that they have to listen to my guidance. But I know you have already done enough of that task. Thanks for your kind words and for encouraging me to stand before this gathering. Now, we are simultaneously talking about two or three things this afternoon. We are talking about governance, we are talking about capacity building, and we are also trying to bring in the artificial intelligence interface, which, of course, whether we like it or not, we have to deal with. It is a fact of life in every domain that we work in.

Now, governance, if you take it, is a dynamic process, like many other processes in every domain that we are engaged in. So also is capacity building. It's dynamic, it's continuous, isn't it? And the times move at such a fast-track pace that by the time you prepare yourself for tomorrow, when tomorrow happens, you realize that you had only prepared for yesterday. And then, on top of that, artificial intelligence, and the role of both in capacity building as well as in governance. And when you talk in the context of India today, of a framework which invokes the best of all three, I think the most encouraging feature is that we have a government in place, a political dispensation in place, which is supportive of all these ideas and all these initiatives.

Till about maybe 15 years back, we would never have thought of a theme like this gaining priority in an economy. We would never have thought of a dynamic forum of this nature. And therefore, this is a dispensation of Prime Minister Modi which is not only ready for futuristic ideas and initiatives, but also for future-ready ideas. And while it looks very fanciful, even romantic, talking about artificial intelligence, if a grocer's shop puts up a banner outside saying, here my items are sold through AI, I'm sure he'll at least be able to attract some number of customers. That is the new reality. But at the same time, being realistic and pragmatic, and Mrs. Radha had also been Secretary, DoPT, before she took over her present assignment, we should also be free to shed some of the old baggage.

So while we are running towards what is new, we should also have the capacity, and be non-possessive enough, to unburden something which we have held close to our chest for so many years. And I'm proud to say that in the last one decade, this government has done away with almost 2,000 rules. And that has not been done by me alone; Madam Radha is equally a part of it. And let's recall, many of those rules were designed for the times they were designed in; they could not have thought of artificial intelligence about 100 years back. There were rules which had been haunting us for more than a century, like getting our certificates and documents attested by a gazetted officer.

We did away with that. We have other means now, and we also now have the means to trust each other. At the same time, we have also opened up to learning new practices, and, while learning new practices, to building the capacity to learn. That is where the Capacity Building Commission comes in, because when you learn new practices, you must also build the capacity to learn. With artificial intelligence, the basic mantra is to learn to be a good learner; and if you have not learned, then to learn to be a good learner. The Capacity Building Commission is one instrument which helps you do that. And this also was an idea which first came from Dr.

Modi himself. I think it is quite a novel idea for the government sector, because usually governments are used to working in a status-quo mode. So we have people from the private sector there, and people from government here, because now we have also got over the barriers between the private and public sectors. Unless we learn to learn from each other, and also give up our scepticism, we will not actually be building up capacity to the optimum; we will be building up capacities which are limited by certain barriers, and that will not be unbridled learning. So, the Capacity Building Commission was there, Mission Karmayogi was there and being talked about, the creation of iGOT, and all these are testament to the governance reforms being accomplished through the optimum use of technologies.

Now, this alliance, which is being launched. When we talk of digital public goods: fortunately or unfortunately, I am one of the few here from the old school of science. So if you take out the first word, digital, and just concentrate on public good, you will realize that good governance is synonymous with public good. Public good, essentially, would be at the core of good governance. It is just that now it has started using digital means, so it sounds fanciful, and now we live in the age of acronyms, so we say DPG. But to a hardcore, old-fashioned student of science, if you ask me, I would say nothing has changed.

It is just the same: good governance, public good. And this has been happening from day one, because one of the Prime Minister's own earliest declarations was maximum governance, minimum government, which in other words means a government which is citizen-centric, which is accountable, which is as transparent as possible, and whose ultimate endeavour is to bring in ease of living. So all that DPG describes was happening earlier; only the announcement is being made today. And I think the Capacity Building Commission deserves appreciation, because they were very independent and very much instrumental in taking us forward with these frameworks.

And I must also congratulate Madam Radha and her team for this launch of the digital capacity building alliance. The guiding principle, ultimately, all of us would agree, would be to build at a scale which is optimally inclusive, to give as much inclusion as possible, and to enable others to learn and also to adopt; otherwise the very purpose of the alliance would not be achieved. And for that purpose artificial intelligence could be a powerful tool, but certainly not an end in itself. Our own ministry is already using it, and from experience the optimum mix, which I would put before this audience, is AI plus HI: artificial intelligence plus human intelligence. And we learned that the hard way with CPGRAMS. We were feeling very proud that our disposal rate of grievances had increased to more than 95%, sometimes almost 100% in a week. But when we looked back, we found: grievance disposal 100%, happiness 0%. That is something like the happiness index being discussed in the West. So we actually had to introduce a human desk there, because everything was happening through AI, and the reply would come before you expected it, and the person would end up wanting somebody there to talk to. So I think that hybrid model is something which is ultimately going to be effective, in my own view.

As far as I'm concerned, this morning I was listening to the IMF chief; she said India's progress in the AI initiative is phenomenal. So I think we have already received recognition from the global judges of global benchmarks. But we must adapt it to what is suited to our conditions. I will give another example of the hybrid model. There is a voluntary clinic in my constituency where there are two doctors: one physically sitting there, the other an AI doctor, a lady. They take the history, do all the assessments, connect it to one of the leading hospitals, three or four of them, connect to the super-specialists there, and the prescription comes out in just about 40 to 50 minutes. It's a very creative new health startup. But of course, when the patient comes, the physical doctor talks to him, and he feels more gratified than when the AI doctor talks to him, though she speaks exactly the same dialect, or better than he does: he speaks Bhojpuri, she speaks better Bhojpuri. So the debate is over. And I am not talking without evidence, because in medical parlance we have something called the placebo effect, and Indians are very used to the placebo effect: you have to get the placebo effect; if you don't, you don't get well.

So you have a whole new world to question, and that holds in the government context as well. As they say, Indian data, Indian solutions and Indian systems are important, so we need a very much indigenized vehicle. Maybe other nations or cultures may not require that, but we will always be requiring it. And I think with this Alliance and the model that Radha has suggested, we would be able to engage more in evaluation, by preparing the draft charter with the content standards and the evaluation framework that is required. But in the end, I will just pick up on what she said, the MANAV part of it, which the Prime Minister spoke of yesterday. Now, how many in this room have actually been able to decipher what was meant by that MANAV?

Not a single hand has gone up. So before you leave this room, if you have an iPad or a notebook, note it down: M stands for moral and ethical systems; A stands for accountable governance; N stands for national sovereignty; the second A stands for accessibility and inclusivity; and finally V stands for validity and legitimacy. Even when we don’t have the capacity to learn the full terms, we have the capacity to learn the acronym, and very soon artificial intelligence will take over this much capacity also, so we need to be present. The other day I was watching this large language model, and we were going to send the language back to it.

So I said, no need. But along with that, we have also buried that beautiful breed of English that we had in an earlier age. We used to see people who knew more than five or six languages, so many of them. Now we are at the risk of even forgetting our own language, because we have someone else to do it for us. That is why, in the end, and I have been into this AI business quite deeply for the last half a decade, trying it in different fields, the moral which I draw for myself, because each of us has to learn this for ourselves, is that one has to be intelligent enough to use artificial intelligence.

Otherwise, you don’t get into this business. And that, I think, is a tagline also for some of the media persons who are here: artificial intelligence can substitute everything on this planet, but it cannot substitute integrity. The other day somebody said, now you will be able to get rid of these problems, because the doctor is sitting thousands of kilometres away doing an ultrasound on a lady who is thousands of kilometres away, so nobody would be able to misuse it. I said no: if I am a doctor and I am smart enough but lack integrity, I will still find a way to do the sex determination all the same.

So that is integrity. I think that is the most important thing: if we are not able to use this with integrity, we run the risk of not putting to active use so many of the good models that came to us but got ruined, not for reasons attributable to them, but for reasons attributable to those of us who had been assigned to handle them. Thank you very much.

Moderator

Thank you so much, sir, for that insightful address, and for laying emphasis on the need for integrity, which is only possible through the idea of having a human in the loop, something which also finds mention in the blueprint that we have just launched. With this we come to the conclusion of the event. I extend my heartfelt gratitude on behalf of the Capacity Building Commission and Karmayogi Bharat. Thank you so much, Honourable Minister sir, dignitaries on the dais, and all those present here. I now request the panellists to stay back for a quick minute for a photograph with Honourable Minister sir and the dignitaries on the dais. I would also like to take this opportunity to invite all present to visit Karmayogi Bharat. Thank you again, sir. Thank you so much.

Related Resources — Knowledge base sources related to the discussion topics (42)
Factual Notes — Claims verified against the Diplo knowledge base (6)
Confirmed (high)

“Chairperson Shubhavi S. Radha Chauhan introduced the Prime Minister’s “Mani Vision”, a human‑centric framework for ethical, accountable and inclusive AI governance.”

The knowledge base describes the launch of the MANAV Vision by the Prime Minister as a human‑centred ethical AI framework, confirming the existence of such a vision despite a slight naming variation.

Correction (high)

“The report refers to the initiative as the “Digital Capacity Building Allowance”.”

The source mentions a “Digital Capacity Building Alliance” launched at the summit, not an allowance, indicating the name in the report is inaccurate.

Additional Context (high)

“Future AI will move away from massive monolithic models toward “small language models, context‑specific, sectoral, and decentralised”.”

Discussion in the knowledge base highlights advocacy for smaller, task‑specific models and a balance between large foundational models and specialized ones, providing supporting context for this claim.

Additional Context (medium)

“Dr Washima opened the session, emphasizing technology as a leveler and AI as the next big thing after electricity, and called for trust‑based collaborative ethical frameworks.”

The source confirms Dr Washima’s opening of the session and the human‑centric, collaborative focus of the discussion, though it does not contain the exact wording of the quote.

Additional Context (medium)

“The Commission has developed holistic policy frameworks, operational guidelines, personalised learning pathways and dynamic governance models for training institutions over the past year.”

The knowledge base notes the launch of a Digital Capacity Building Alliance aimed at integrating AI into public governance with a human‑centric approach, adding context to the Commission’s capacity‑building activities.

Additional Context (medium)

“Subramanian Ramadorai positioned India’s AI approach as a “third way” between US market‑led experimentation and China’s state‑led techno‑nationalism.”

The source discusses contrasting AI governance approaches of China and India, providing background that supports the notion of India taking a distinct, middle‑ground path.

External Sources (138)
S1
Building the Workforce_ AI for Viksit Bharat 2047 — -Shubhavi S. Radha Chauhan- Role/Title: Chairperson of the Capacity Building Commission, Area of expertise: Public admin…
S2
Building the Workforce_ AI for Viksit Bharat 2047 — Agreed with:Robin Scott, Guilherme Albusco Almeida — Importance of capacity building as strategic priority Agreed with:…
S3
Building the Workforce_ AI for Viksit Bharat 2047 — -Dr. Washima- Role/Title: Not specified, Area of expertise: Not specified
S4
Building the Workforce_ AI for Viksit Bharat 2047 — Speakers:Dr. Washima, Dr. Jitendra Singh, Robin Scott Speakers:Dr. Washima, Subramanian Ramadorai
S5
Building the Workforce_ AI for Viksit Bharat 2047 — -Dr. Jitendra Singh- Role/Title: Honorable Minister, Minister of State for Personnel, Minister of State for Personal Gri…
S6
Building the Workforce_ AI for Viksit Bharat 2047 — Thank you so much, ma ‘am, for placing the panel through the address, placing everything into the context for the panel …
S7
https://dig.watch/event/india-ai-impact-summit-2026/building-the-workforce_-ai-for-viksit-bharat-2047 — Minister in the National Council on Scale Development We welcome you sir On the panel, we are joined by Guilherme Albusc…
S8
Building the Workforce_ AI for Viksit Bharat 2047 — – Anil Shivastava- Dr. Jitendra Singh – Anil Shivastava- Subramanian Ramadorai
S9
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S10
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S11
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S12
Building the Workforce_ AI for Viksit Bharat 2047 — -Subramanian Ramadorai- Role/Title: Chairperson of Karni Nagi Bharat and former M.D. and CEO of Tata Consultancy Service…
S13
Building the Workforce_ AI for Viksit Bharat 2047 — Speakers:Anil Shivastava, Subramanian Ramadorai Speakers:Audience, Subramanian Ramadorai Speakers:Dr. Washima, Subrama…
S14
Building the Workforce_ AI for Viksit Bharat 2047 — Guilherme Albusco Almeida from Brazil, noting his fifth trip to India, identified strong collaboration opportunities in …
S15
Building the Workforce_ AI for Viksit Bharat 2047 — Speakers:Anil Shivastava, Guilherme Albusco Almeida Speakers:Guilherme Albusco Almeida, Anil Shivastava Speakers:Guilh…
S16
Building the Workforce_ AI for Viksit Bharat 2047 — -Speaker 1- Role/Title: Not specified, Area of expertise: Not specified -Speaker 3- Role/Title: Not specified, Area of …
S17
S19
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S20
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S21
Conversation: 02 — -Moderator: Role/Title: Event moderator; Area of expertise: Not specified
S22
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S23
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S24
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S25
https://dig.watch/event/india-ai-impact-summit-2026/how-ai-is-transforming-diplomacy-and-conflict-management — And the MOVE 37 initiative that we’re here to talk to you about today. is a part of that program. As you can imagine, in…
S26
How AI Is Transforming Diplomacy and Conflict Management — I’ve been a major figure in international policy for the United States and in education at the Belfer Center, where our …
S27
https://app.faicon.ai/ai-impact-summit-2026/how-ai-is-transforming-diplomacy-and-conflict-management — And the MOVE 37 initiative that we’re here to talk to you about today. is a part of that program. As you can imagine, in…
S28
India unveils MANAV Vision as new global pathway for ethical AI — Narendra Modipresentedthe new MANAV Vision during the India AI Impact Summit 2026 in New Delhi, setting out a human-cent…
S29
https://app.faicon.ai/ai-impact-summit-2026/building-the-workforce_-ai-for-viksit-bharat-2047 — But I would not squadron of these children going through. of the exhibitions, exhibits. As artificial intelligence becom…
S30
AI for agriculture Scaling Intelegence for food and climate resiliance — Thank you very much for that question Vikasji not only is this year the international year of women farmer but we know t…
S31
Responsible AI in India Leadership Ethics & Global Impact — Amol argues that as AI evolves from simple generative applications to more complex agentic systems, responsible AI imple…
S32
Keynote-Rishad Premji — He advocates for a practical approach where “models designed for specific processes or decisions tend to deliver the mos…
S33
Leveraging AI4All_ Pathways to Inclusion — Creating these incentives we believe is very important to be able to scale inclusion through AI deployments. The last pa…
S34
Leveraging AI4All_ Pathways to Inclusion — healthcare, finance, education, urban planning, but I’m going to only focus for a few for this particular evening. First…
S35
Climate change and Technology implementation | IGF 2023 WS #570 — João Vitor Andrade:Hi, everyone. I’d like to thank you all to be present here today. My name is João Vitor, I’m from Bra…
S36
How AI Is Transforming Diplomacy and Conflict Management — And the MOVE 37 initiative that we’re here to talk to you about today. is a part of that program. As you can imagine, in…
S37
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The roadmap is built upon core principles including “human and planetary welfare, accountability and transparency, inclu…
S38
WSIS Action Line C10: Ethics in AI: Shaping a Human-Centred Future in the Digital Age — The conversation evolved from abstract principles to practical implementation challenges, ultimately revealing that the …
S39
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Hiya, how are you doing? Check, check. Is that better? Cool. Again, hello. Welcome. My name is Chri…
S40
AI &amp; Child Rights: Implementing UNICEF Policy Guidance | IGF 2023 WS #469 — Incidents such as the arrest of a young man near Windsor Castle, who was influenced by his AI assistant to harm the Quee…
S41
Leading in the Digital Era: How can the Public Sector prepare for the AI age? — Tawfik Jelassi:Thank you, Pratik. Good morning, all excellencies, esteemed guests, vicious participants. I’m very please…
S42
Welcome Address — India positions itself as a central hub of technology talent, leveraging a strong IT background and dynamic startup ecos…
S43
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — India’s unique position—combining technical talent, diverse datasets, a vibrant startup ecosystem, and supportive policy…
S44
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This comment provides crucial context about India’s position in the global AI ecosystem, distinguishing between applicat…
S45
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Galia Daor:Yeah, thanks very much. I admit it’s a bit challenging to speak after Allison on that front, but I will try, …
S46
Advancing Scientific AI with Safety Ethics and Responsibility — -Global South Perspectives and Adaptation: A significant focus was placed on how emerging scientific powers can shape AI…
S47
Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37 — Advising country partners to consider environmental implications in digitalization is a key recommendation. Technology s…
S48
WSIS Plus 20 Review: UN General Assembly High-Level Meeting – Comprehensive Summary — Climate Change and Sustainability UNCTAD acknowledges the significant environmental impact of the digital economy and t…
S49
WS #466 AI at a Crossroads Between Sovereignty and Sustainability — Environmental Impact and Climate Justice Moltzau argues that given the current climate crisis and multiple global chall…
S50
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S51
Open Forum #60 Cooperating for Digital Resilience and Prosperity — Luca Belli: Luca. Yeah, I will use my 30 seconds wisely. So just to reiterate a couple of points that I think are key he…
S52
Masterclass#1 — An extended analysis underscores the fundamental role of capacity building in the enhancement of organisational incident…
S53
Day 0 Event #188 Top Business and Technology Trends in Government for 2024 — Legacy systems, which are old computer systems used by government organizations, pose a significant challenge. Upgrading…
S54
Opening of the session — Capacity building is essential for political and institutional resource development. There is a need for reflecting cap…
S55
Closing remarks – Charting the path forward — Bouverot emphasizes that AI governance must address environmental concerns by incorporating sustainability measures. Thi…
S56
Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37 — Another perspective suggests that countries from the Global South are not prioritising sustainability and climate protec…
S57
WS #466 AI at a Crossroads Between Sovereignty and Sustainability — Environmental Impact and Climate Justice Moltzau argues that given the current climate crisis and multiple global chall…
S58
Building Climate-Resilient Systems with AI — Artificial intelligence | Environmental impacts Both speakers highlight agriculture as a major climate sector, with spe…
S59
Building Sovereign and Responsible AI Beyond Proof of Concepts — Green AIaddresses both environmental impact and economic viability. The speakers argued that these concerns are intrinsi…
S60
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S61
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — Anne Le Henanf argues that as governments work to decarbonize, AI’s rapidly growing energy requirements pose a significa…
S62
AI’s growing role in environmental sustainability — AIis expandingrapidly, driving rising electricity and water consumption, which has fuelled concerns about environmental …
S63
Building the Workforce_ AI for Viksit Bharat 2047 — and partnerships from the Capacity Building Commission to deliver welcome remarks. Good afternoon. Thank you, Mustafa. A…
S64
https://dig.watch/event/india-ai-impact-summit-2026/building-the-workforce_-ai-for-viksit-bharat-2047 — Responsibility is to carve out trust -based collaborative ethical frameworks so that the demands of fast -paced dynamic …
S65
Open Forum: Liberating Science — It advocates for open discussions and debates, the accountability of politicians, transparency in science, and the ident…
S66
Leaders TalkX: Ethical Dimensions of the Information Society — Ana Neves from the United Nations Commission on Science and Technology for Development spoke about the importance of pub…
S67
Big Tech boosts India’s AI ambitions amid concerns over talent flight and limited infrastructure — Majorannouncementsfrom Microsoft ($17.5bn) and Amazon (over $35bn by 2030) have placed India at the centre of global AI …
S68
Building Inclusive Societies with AI — This discussion focused on addressing the challenges faced by India’s informal workforce, which comprises 490 million wo…
S69
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — Massive Workforce Development Challenge: The industry faces a critical shortage of approximately 1 million skilled worke…
S70
Shaping the Future AI Strategies for Jobs and Economic Development — But the good thing is humans want touch. So that’s good. But, you know, there will be a lot of revolution in terms of te…
S71
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Amandeep Singh Gil: Thank you. And thank you to you and to Abhishek for getting us together. I think there’s strong mome…
S72
Designing the AI Factory Scaling Compute to Sovereign AI — Impact:This observation elevated the discussion to policy analysis and international comparisons. It positioned India’s …
S73
Interim Report: — – Individuals – o Human dignity/value/agency (manipulation, deception, nudging, sentencing) – o Life, safety, security (…
S74
Policymaker’s Guide to International AI Safety Coordination — Impact:This analogy provided a tangible framework that other participants could relate to, moving the discussion from th…
S75
Building the Workforce_ AI for Viksit Bharat 2047 — Chairperson Shubhavi S. Radha Chauhan introduced the MANAV vision for human-centric AI governance, notably disclosing th…
S76
How Small AI Solutions Are Creating Big Social Change — African languages. And we just released a data set of 21 now, 27 voice languages, given that Africa has 2 ,000 or so lan…
S77
Building Population-Scale Digital Public Infrastructure for AI — The Brazilian experience illustrates the systemic nature of the challenge, requiring coordinated changes in legal framew…
S78
From India to the Global South_ Advancing Social Impact with AI — Disagreement level:Low level of disagreement with high convergence on AI’s transformative potential. Differences are pri…
S79
From India to the Global South_ Advancing Social Impact with AI — Low level of disagreement with high convergence on AI’s transformative potential. Differences are primarily tactical rat…
S80
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — Doreen Bogdan-Martin: Thank you. Perhaps to pick up on last point on humanity, because I think at the end it’s all about…
S81
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Thanks, Ahmed. Well, everyone, I’ll walk through I think a little bit of this presentation here on what…
S82
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 2 — The UN High Commissioner for Human Rights argues that AI systems should advance human rights by design, requiring alloca…
S83
Building the Workforce_ AI for Viksit Bharat 2047 — Dr. Washima argued that the fast-paced demands of the AI era require well-designed collaborative frameworks built on tru…
S84
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Governments have collectively affirmed the importance of building trust by governing AI based on human rights, and that …
S85
Building the Workforce_ AI for Viksit Bharat 2047 — And to govern at this scale, this space, we need an ever -evolving system. So India built one. In 2020, our leader, Prim…
S86
Leading in the Digital Era: How can the Public Sector prepare for the AI age? — Tawfik Jelassi:Thank you, Pratik. Good morning, all excellencies, esteemed guests, vicious participants. I’m very please…
S87
WS #279 AI: Guardian for Critical Infrastructure in Developing World — Countries should create customized national strategies for capacity building in AI and cybersecurity. These strategies s…
S88
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This comment provides crucial context about India’s position in the global AI ecosystem, distinguishing between applicat…
S89
AI Innovation in India — This comment energized the discussion by providing a grand vision that contextualized all the individual innovations wit…
S90
AI Innovation in India — Bagla articulated a compelling vision of India’s unique advantages in the global AI landscape, asserting that India will…
S91
Driving Indias AI Future Growth Innovation and Impact — Manish Gupta argues that India needs to transition from being primarily a consumer of technology to becoming a creator a…
S92
Global Forum on the Ethics of AI 2025 — Building on previous forums in Prague and Ljubljana, this year’s event will focus on AI’s impact on human rights, gender…
S93
WS #110 AI Innovation Responsible Development Ethical Imperatives — Promote international collaboration through shared vocabulary, ethical frameworks, and interoperable standards
S94
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — International cooperation and knowledge sharing are crucial for scaling capacity building efforts, particularly for deve…
S95
Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37 — Advising country partners to consider environmental implications in digitalization is a key recommendation. Technology s…
S96
Navigating the Double-Edged Sword: ICT’s and AI’s Impact on Energy Consumption, GHG Emissions, and Environmental Sustainability — An expert panel convened to examine the complex relationship between Information and Communication Technologies (ICTs) a…
S97
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S98
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — Artificial intelligence | Environmental impacts
S99
Ministerial Roundtable — ## Opening Remarks and Framework Setting
S100
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S101
Parliamentary Roundtable Safeguarding Democracy in the Digital Age Legislative Priorities and Policy Pathways — The discussion maintained a serious but collaborative tone throughout. It began with formal opening remarks emphasizing …
S102
Keynote-Jeet Adani — Overall Tone:The tone was consistently aspirational, patriotic, and strategic throughout. Jeet Adani maintained a confid…
S103
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — Respected Honorable Chairman, Distinguished Speakers, Eminent Guests, Colleagues and Participants. It is my privilege to…
S104
Towards a Resilient Information Ecosystem: Balancing Platform Governance and Technology — ## Major Discussion Points: The discussion maintained a professional, collaborative tone throughout, characterized by c…
S105
Prosperity Through Data Infrastructure — Additionally, it highlights the challenge of legacy systems in digitalisation and the need for investment in both techno…
S106
Panel 5 – Ensuring Digital Resilience: Linking Submarine Cables to Broader Resilience Goals — This comment emphasizes the critical importance of collaboration while also pushing for concrete actions rather than jus…
S107
Day 0 Event #188 Top Business and Technology Trends in Government for 2024 — 2. Internal challenges: Legacy IT systems, which are costly and difficult to upgrade, pose a significant obstacle to mod…
S108
How Multilingual AI Bridges the Gap to Inclusive Access — The tone was consistently collaborative, optimistic, and mission-driven throughout the conversation. Speakers demonstrat…
S109
Launch / Award Event #52 Intelligent Society Development &amp; Governance Research — The discussion maintained a consistently optimistic and collaborative tone throughout. Speakers expressed enthusiasm abo…
S110
The Role of Government and Innovators in Citizen-Centric AI — The discussion maintained an optimistic and collaborative tone throughout, with speakers expressing enthusiasm about AI’…
S111
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — The tone is consistently optimistic, collaborative, and forward-looking throughout the discussion. Speakers emphasize “l…
S112
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S113
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — The tone begins confrontational and personal as Hunter-Torricke distances himself from his tech industry past, then shif…
S114
Women, peace and security — The overall tone was one of concern and urgency. Many speakers expressed alarm at negative trends and backsliding on wom…
S115
The State of Digital Fragmentation (Digital Policy Alert) — The tension has reached a significant point.
S116
https://dig.watch/event/india-ai-impact-summit-2026/welfare-for-all-ensuring-equitable-ai-in-the-worlds-democracies — It’s fascinating maybe we’ll come back to it as we talk to a close. Let me shift gears a little bit and talk a bit more …
S117
[Parliamentary Session Closing] Closing remarks — [Parliamentary Session Closing] Closing remarks
S118
(Interactive Dialogue 4) Summit of the Future – General Assembly, 79th session — The overall tone was one of urgency and determination. Many speakers emphasized that “the future starts now” and stresse…
S119
Any other business /Adoption of the report/ Closure of the session — The statement offers a sense of success and a forward-looking optimism, referencing a soon-to-occur resumed session. Thi…
S120
Parliamentary Closing Closing Remarks and Key Messages From the Parliamentary Track — The discussion maintained a collaborative and constructive tone throughout, characterized by diplomatic language and mut…
S121
Scaling Innovation Building a Robust AI Startup Ecosystem — Overall Tone:The tone was consistently celebratory, appreciative, and inspirational throughout. It began formally with t…
S122
AI Policy Summit Opening Remarks: Discussion Report — “The only way you could see that he was communicating with us is that there was a little bit of a tear coming out of his…
S123
(Plenary segment) Summit of the Future – General Assembly, 4th plenary meeting, 79th session — Feleti Teo: Mr. President, Excellencies, it is with absolute and immense honour that I, as the head of government for …
S124
Bridging the AI innovation gap — LJ Rich: to invite our opening keynote. It’s a pleasure to invite to the stage the director of the Telecommunications St…
S125
AI for Good Technology That Empowers People — Social and economic development | Artificial intelligence | Information and communication technologies for development
S126
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Cristiano Amon — “I look of what happened in mobile in India, and one of the largest data consumption per user in mobile devices in the w…
S127
[Parliamentary Session 3] Researching at the frontier: Insights from the private sector in developing large-scale AI systems — Ammari highlighted META’s open-source approach to large language models, explaining, “META has adopted an open source me…
S128
Developing capacities for bottom-up AI in the Global South: What role for the international community? — The discussion explored alternatives to mainstream Western AI approaches. Gurumurthy highlighted the BRICS AI declaratio…
S129
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — – Marco Zennaro- Adham Abouzied Balance between large foundational models and small specialized models Development | I…
S130
WSIS Action Lines C4 and C7:E-employment: Emerging technologies in the world of work: Addressing challenges through digital skills — Shekhar emphasised that this transformation necessitates three critical strategies for effective response. First, organi…
S131
DC-SIG Involving Schools of Internet Governance in achieving SDGs | IGF 2023 — Despite some participants initially being absent, Avri Doria ensured that the session followed the outlined agenda. She …
S132
WSIS+20 High-Level Dialogue: WSIS Legacy in Motion: Honoring the Past, Shaping the Future — Garza advocates for comprehensive policy approaches that ensure legal certainty and enable trusted international data ex…
S133
Driving Social Good with AI_ Evaluation and Open Source at Scale — This panel discussion at an AI and digital development summit brought together experts from diverse backgrounds to exami…
S134
Open Internet Inclusive AI Unlocking Innovation for All — With decades of experience across entrepreneurship, investing, and global technology leadership, Rajan has played a pivo…
S135
Global challenges for the governance of the digital world — Coordination, the third bucket, detailed the imperative for maintaining a cohesive and collaborative discourse across nu…
S136
The Global Power Shift India’s Rise in AI &amp; Semiconductors — Thank you. Thank you. across CPUs, GPUs, SoCs, and AI engines that power cutting -edge compute systems worldwide. She br…
S137
https://app.faicon.ai/ai-impact-summit-2026/the-global-power-shift-indias-rise-in-ai-semiconductors — Joining us is Professor Vivek Kumar Singh, Senior… advisor on science and technology at NITI IO. Professor Singh plays…
S138
China and India adopt contrasting approaches to AI governance — As AI becomes central to business strategy, questions of corporate governance and regulation are gaining prominence. The s…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Dr. Washima
1 argument · 122 words per minute · 251 words · 122 seconds
Argument 1
Emphasizes the need for trust‑based ethical frameworks and collective policy discourse (Dr. Washima)
EXPLANATION
Dr. Washima stresses that the rapid evolution of AI demands trust‑based collaborative ethical frameworks and a collective policy discourse to ensure safe, equitable public services. He links this responsibility to the summit’s theme of AI for economic development and social good.
EVIDENCE
He states that the responsibility is to “carve out trust-based collaborative ethical frameworks” and that “the need for collective discourse at the policy level is crucial” to harness equitable benefits, mitigate risks, and ensure inclusive governance transformation [10-13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The summit transcript stresses the need for trust‑based collaborative ethical frameworks and collective policy discourse, which is corroborated by external sources S1 and S2 that highlight the same requirement for safe, equitable public services.
MAJOR DISCUSSION POINT
Trust‑based ethical frameworks and policy discourse
AGREED WITH
Shubhavi S. Radha Chauhan, Robin Scott, Dr. Jitendra Singh
Shubhavi S. Radha Chauhan
1 argument · 115 words per minute · 886 words · 460 seconds
Argument 1
Announces the “Mani Vision” for human‑centric, accountable AI and advocates sector‑specific small language models (Shubhavi S. Radha Chauhan)
EXPLANATION
She introduces the “Mani Vision”, a human‑centric framework for ethical, accountable and inclusive AI governance, and argues that future AI will shift from massive monolithic models to small, context‑specific, decentralized language models. This approach aims to enable localized, sector‑specific AI deployment.
EVIDENCE
She references the Prime Minister’s outline of “Mani Vision, a human-centric framework for ethical, accountable and inclusive AI governance” [23] and notes that “the future of AI… will be in small language models, context-specific, sectoral, and decentralized” [30-32].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The “Mani Vision” human‑centric framework is referenced in external source S1, and a related “MANAV Vision” is described in S28, providing supporting evidence for the announced initiative.
MAJOR DISCUSSION POINT
Mani Vision and sector‑specific small language models
AGREED WITH
Subramanian Ramadorai, Anil Shivastava
Subramanian Ramadorai
2 arguments · 140 words per minute · 1223 words · 520 seconds
Argument 1
Frames AI as a movement to elevate humanity and proposes India’s third‑way partnership model (Subramanian Ramadorai)
EXPLANATION
Ramadorai portrays AI as a transformative movement that should elevate humanity rather than merely scale technology. He proposes India’s “third‑way” partnership model, positioned between US market‑led and China state‑led approaches, leveraging India’s large IT workforce.
EVIDENCE
He remarks that “the most important question for this summit is not how far we can scale AI but how we can recognize it’s a movement in a direction that elevates humanity” [68-69] and describes a “third way” partnership for India, distinct from US and China, backed by India’s 5.8 million IT professionals [74-91]. He also cites the Tata AI Saki Immersion Programme and the potential of small language models for edge devices [92-98].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External source S1 quotes the speaker saying the summit’s key question is how AI can “elevate humanity” and mentions India’s “third‑way” partnership model, directly supporting this argument.
MAJOR DISCUSSION POINT
AI as a movement and India’s third‑way partnership
AGREED WITH
Guilherme Albusco Almeida, Robin Scott, Speaker 3
Argument 2
Highlights the Tata AI Saki Immersion Programme empowering rural women and the potential of edge‑device small language models for local needs (Subramanian Ramadorai)
EXPLANATION
Ramadorai showcases a concrete example of AI empowerment through the Tata AI Saki Immersion Programme, which equips rural women artists with AI tools for livelihood opportunities, and emphasizes the relevance of small, domain‑specific language models that can run on edge devices in rural contexts.
EVIDENCE
He mentions that “the Tata AI Saki Immersion Programme is empowering rural women artists to use AI as a tool for livelihood opportunity” and that “the next billion AI users may interact with tiny embedded AI in phones, tractors, classrooms, clinics and local government systems” with small language models suitable for rural and edge environments [92-98].
MAJOR DISCUSSION POINT
AI empowerment programs and edge‑device models for public services
AGREED WITH
Shubhavi S. Radha Chauhan, Anil Shivastava
Anil Shivastava
3 arguments · 145 words per minute · 671 words · 275 seconds
Argument 1
Argues that AI cannot be simply layered onto legacy systems; it requires re‑engineering, contextual data, multilingual support, and new security safeguards (Anil Shivastava)
EXPLANATION
Shivastava contends that AI cannot be added as a superficial layer on existing legacy IT systems. Effective AI integration demands re‑engineering of architecture, provision of contextual data, multilingual capabilities, and strengthened security and data‑sovereignty safeguards.
EVIDENCE
He explains that “AI is not a layer that you could just put on existing systems” and that legacy systems have “silos of data, silos of business logic” whereas AI needs contextual data for training [122-129]. He adds the need for multilingual support for ASHA workers [133-135] and highlights security and data-sovereignty considerations requiring new safeguards [136-138].
MAJOR DISCUSSION POINT
Need for system redesign and security safeguards
AGREED WITH
Subramanian Ramadorai, Shubhavi S. Radha Chauhan, Robin Scott
Argument 2
States Google’s commitment to achieve carbon‑neutral data centres worldwide by 2030 and to develop energy‑efficient models (Anil Shivastava)
EXPLANATION
Shivastava reports that Google has pledged to make all its data centres carbon‑neutral by 2030 and is working on energy‑efficient AI models, offering to partner with the Indian government to set similar targets locally.
EVIDENCE
He notes, “Google, first of all, has committed that by 2030, all our data centers will be carbon neutral” and that they aim to ensure Indian data centres meet carbon-neutral targets, also mentioning the possibility of building energy-efficient models [215-218].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Google’s pledge for carbon‑neutral data centres by 2030 and work on energy‑efficient AI models is documented in external source S1.
MAJOR DISCUSSION POINT
Corporate carbon‑neutral commitments for AI infrastructure
AGREED WITH
Robin Scott, Guilherme Albusco Almeida
Argument 3
Notes AI support for ASHA workers in native languages, illustrating practical deployment in frontline services (Anil Shivastava)
EXPLANATION
Shivastava highlights that AI can be deployed to assist ASHA health workers in delivering services in their native languages, demonstrating the importance of multilingual AI for frontline public service delivery.
EVIDENCE
He states that “ASHA worker could actually sort of go and can deliver services using AI in their own native language” emphasizing the need for multilingual support [133-135].
MAJOR DISCUSSION POINT
Multilingual AI for frontline health workers
AGREED WITH
Shubhavi S. Radha Chauhan, Subramanian Ramadorai
Guilherme Albusco Almeida
2 arguments · 121 words per minute · 613 words · 302 seconds
Argument 1
Proposes Brazil‑India R&D cooperation, shared capacity‑building platforms, and South‑South collaboration on AI ethics (Guilherme Albusco Almeida)
EXPLANATION
Almeida advocates for deep Brazil‑India collaboration in AI research and development, shared capacity‑building platforms, and South‑South partnerships on AI ethics, noting complementary strengths and existing training initiatives in both countries.
EVIDENCE
He mentions that Brazil and India are “really close and can collaborate a lot” and cites opportunities in “R&D” and “capacity building” through similar organizations, training civil servants, and a partnership with Apolitical to build AI knowledge for public service [147-165].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External source S1 notes strong Brazil‑India collaboration opportunities in research, capacity building, and ethical framework development, aligning with the proposed partnership.
MAJOR DISCUSSION POINT
Brazil‑India R&D and capacity‑building partnership
AGREED WITH
Subramanian Ramadorai, Robin Scott, Speaker 3
Argument 2
Describes “green AI” initiatives and AI applications for climate policy, such as forest‑monitoring systems in Brazil (Guilherme Albusco Almeida)
EXPLANATION
Almeida outlines two strands of green AI: improving the energy efficiency of AI hardware and using AI to enhance climate policy, exemplified by a Brazilian system that monitors forests on private lands to detect deforestation.
EVIDENCE
He refers to “green AI” focusing on sustainable GPU power [205-207] and explains that Brazil uses AI to analyze forest data in the Rural Environmental Registry, detecting illegal logging and supporting reforestation [208-213].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concept of “green AI” and its use for climate policy, including a Brazilian forest‑monitoring system for detecting illegal logging, is detailed in external source S2.
MAJOR DISCUSSION POINT
Green AI and AI for climate policy
AGREED WITH
Robin Scott, Anil Shivastava
Robin Scott
3 arguments · 150 words per minute · 452 words · 180 seconds
Argument 1
Highlights global networks (e.g., Apolitical) and the need for a shared AI impact framework; stresses capacity building as an engine of innovation (Robin Scott)
EXPLANATION
Scott emphasizes the importance of global networks such as Apolitical in building AI capacity for public servants and argues that capacity building should be viewed as a driver of innovation, calling for a shared AI impact framework to guide public sector adoption.
EVIDENCE
She notes partnerships with Apolitical and a “coalition of willing organizations building knowledge for AI in public service” and stresses that capacity building is “an engine of innovation” [165-166].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External source S1 mentions partnerships with Apolitical and describes capacity building as an engine of innovation, supporting the call for a shared AI impact framework.
MAJOR DISCUSSION POINT
Global networks and shared AI impact framework
AGREED WITH
Guilherme Albusco Almeida, Subramanian Ramadorai, Speaker 3
Argument 2
Presents survey data showing low awareness of ethical frameworks, lack of evaluation for pilots, but high optimism and large productivity potential (Robin Scott)
EXPLANATION
Scott shares findings from an 8,000‑person global survey indicating that only a small fraction of public servants understand their government’s AI ethical frameworks, many lack evaluation plans for pilots, yet there is strong optimism about AI’s productivity benefits.
EVIDENCE
She reports that only 26% understand ethical frameworks, 75% are “freestyling”, 72% plan pilots but only 45% have evaluation plans, and over 90% are optimistic about AI’s role, with a $1.75 trillion productivity prize cited [181-194].
MAJOR DISCUSSION POINT
Survey‑based gaps in AI readiness and ethics
Argument 3
Introduces a dedicated AI‑and‑climate course developed with the Stanford Doerr School of Sustainability (Robin Scott)
EXPLANATION
Scott announces the creation of a specialized course that links AI concepts with climate sustainability, developed in collaboration with Stanford’s Doerr School of Sustainability, to educate stakeholders on responsible AI‑climate intersections.
EVIDENCE
She states, “we have developed a course on AI and climate and understanding the links with the Stanford Doerr School of Sustainability” [201-203].
MAJOR DISCUSSION POINT
AI‑and‑climate educational program
AGREED WITH
Guilherme Albusco Almeida, Anil Shivastava
Speaker 3
1 argument · 77 words per minute · 9 words · 6 seconds
Argument 1
Calls for a generic, collaborative international AI impact assessment tool to harmonize frameworks across countries (Speaker 3)
EXPLANATION
Speaker 3 urges the creation of a generic, collaborative tool that can assess AI impact internationally, aiming to harmonize procurement, ethical, and competence frameworks across nations.
EVIDENCE
In a question, the speaker suggests “maybe we need to have something a more generic collaborative international AI impact framework assessment tool” to address the lack of unified frameworks [246].
MAJOR DISCUSSION POINT
Need for generic international AI impact assessment tool
AGREED WITH
Guilherme Albusco Almeida, Subramanian Ramadorai, Robin Scott
Speaker 1
1 argument · 114 words per minute · 284 words · 149 seconds
Argument 1
Outlines Mission Kalmaragi, the national digital learning platform, AI‑enabled government services, and a global alliance for inclusive capacity building (Speaker 1)
EXPLANATION
Speaker 1 presents Mission Kalmaragi as India’s national digital learning platform that supports AI‑enabled government services and announces a Digital Capacity Building Alliance designed to provide a non‑proprietary, inclusive framework for capacity building across nations.
EVIDENCE
He describes Mission Kalmaragi, the Kalmaragi platform, AI-enabled governments, and the proposal for a Digital Capacity Building Alliance involving governments, industry, academia, civil society, and startups, emphasizing trust, empathy and inclusion [251-269].
MAJOR DISCUSSION POINT
Launch of Digital Capacity Building Alliance and Mission Kalmaragi
Dr. Jitendra Singh
1 argument · 144 words per minute · 2236 words · 927 seconds
Argument 1
Emphasizes dynamic governance, the importance of integrity, hybrid public‑private models, and support for the newly launched alliance (Dr. Jitendra Singh)
EXPLANATION
Dr. Singh underscores that both governance and capacity building are dynamic processes, stresses the centrality of integrity and hybrid public‑private collaborations, and endorses the newly launched Digital Capacity Building Alliance as a vehicle for responsible AI integration.
EVIDENCE
He remarks that governance and capacity building are “dynamic” and highlights integrity, hybrid models, and the alliance’s role in embedding AI responsibly, noting reforms such as removal of 2,000 rules and the alliance’s potential to drive inclusive AI adoption [292-306][311-322].
MAJOR DISCUSSION POINT
Dynamic governance, integrity, and support for alliance
Moderator
1 argument · 96 words per minute · 632 words · 393 seconds
Argument 1
Facilitates the discussion, introduces panelists, and links capacity‑building to AI policy goals (Moderator)
EXPLANATION
The Moderator steers the session by thanking previous speakers, introducing the chairperson and panelists, and framing the discussion within the broader AI policy and capacity‑building objectives of the summit, later prompting the launch of the alliance blueprint.
EVIDENCE
He thanks Dr. Washima, invites the chairperson, introduces panelists such as Professor Amogarai, Guilherme, Anil and Robin, and later asks participants to stand for photographs after the blueprint launch [16-17][42-45][271-275].
MAJOR DISCUSSION POINT
Moderation and linking capacity building to AI policy
Audience
1 argument · 144 words per minute · 364 words · 151 seconds
Argument 1
Warns of imminent risks (referencing the Doomsday Clock) and urges decisive action on AI governance by 2047, criticizing procrastination (Audience)
EXPLANATION
Audience members invoke the Doomsday Clock to illustrate existential risk, describe the situation as a precipice, and demand that India act decisively on AI governance by 2047, condemning current procrastination.
EVIDENCE
They reference the Doomsday Clock at 85 seconds to midnight, describe an “overloaded truck with weapons of mass destruction” and a “precipice” metaphor, and ask whether Bharat should act by 2047, labeling inaction as procrastination [227-234][236-244].
MAJOR DISCUSSION POINT
Urgency and accountability in AI governance
Agreements
Agreement Points
Need for trust‑based ethical frameworks and collective policy discourse
Speakers: Dr. Washima, Shubhavi S. Radha Chauhan, Robin Scott, Dr. Jitendra Singh
Emphasizes the need for trust‑based ethical frameworks and collective policy discourse (Dr. Washima)
Announces the “Mani Vision” for human‑centric, accountable AI (Shubhavi S. Radha Chauhan)
Presents survey data showing low awareness of ethical frameworks (Robin Scott)
Emphasizes dynamic governance, the importance of integrity, and support for the newly launched alliance (Dr. Jitendra Singh)
All four speakers stress that AI deployment must be grounded in trustworthy, ethical and accountable frameworks, requiring collective policy dialogue and integrity at the governance level [10-13][23-24][181-187][292-306].
POLICY CONTEXT (KNOWLEDGE BASE)
The call for trust-based collaborative ethical frameworks mirrors discussions in the Open Forum where participants emphasized carving out such frameworks for fast-paced AI deployment [S64], and aligns with UN-CTSD guidance that ethical considerations must be woven into public policy to ensure equitable access [S66].
Shift to sector‑specific, small, context‑specific language models and multilingual AI
Speakers: Shubhavi S. Radha Chauhan, Subramanian Ramadorai, Anil Shivastava
Announces the “Mani Vision” for human‑centric, accountable AI and advocates sector‑specific small language models (Shubhavi S. Radha Chauhan)
Highlights the Tata AI Saki Immersion Programme empowering rural women and the potential of edge‑device small language models for local needs (Subramanian Ramadorai)
Notes AI support for ASHA workers in native languages, illustrating practical deployment in frontline services (Anil Shivastava)
The speakers converge on the view that future AI should move away from monolithic models toward small, decentralized, sector-specific models that support local languages and frontline services [30-32][96-98][133-135].
POLICY CONTEXT (KNOWLEDGE BASE)
Recent initiatives highlight the move toward small, multilingual AI solutions for low-resource languages, exemplified by the release of African language datasets and partnership-led models [S76], and broader consensus on inclusive, multilingual AI development from the India-Global South dialogue [S78].
Capacity building is essential and requires re‑engineering of legacy systems
Speakers: Anil Shivastava, Subramanian Ramadorai, Shubhavi S. Radha Chauhan, Robin Scott
Argues that AI cannot be simply layered onto legacy systems; it requires re‑engineering, contextual data, multilingual support, and new security safeguards (Anil Shivastava)
Capacity building must therefore focus on enabling our officials to deconstruct complexities, impose appropriate guardrails on data and its use (Subramanian Ramadorai)
Capacity building must therefore focus on enabling our officials to deconstruct complexities (Shubhavi S. Radha Chauhan)
Highlights global networks (e.g., Apolitical) and the need for a shared AI impact framework; stresses capacity building as an engine of innovation (Robin Scott)
All four agree that effective AI adoption demands systematic capacity-building programmes, including redesign of existing IT architectures, data-centric guardrails and multilingual capabilities, and that such capacity building drives innovation [122-138][33-34][33][165-166].
POLICY CONTEXT (KNOWLEDGE BASE)
Capacity building is identified as a cornerstone for robust critical information infrastructure and SDG-9 achievement [S52]; legacy government systems are flagged as a major barrier that must be upgraded to meet modern AI demands [S53]; policy texts repeatedly stress prioritising capacity building in adopted frameworks [S54].
International collaboration (Brazil‑India) and shared AI impact assessment tools
Speakers: Guilherme Albusco Almeida, Subramanian Ramadorai, Robin Scott, Speaker 3
Proposes Brazil‑India R&D cooperation, shared capacity‑building platforms, and South‑South collaboration on AI ethics (Guilherme Albusco Almeida)
Frames AI as a movement to elevate humanity and proposes India’s third‑way partnership model (Subramanian Ramadorai)
Highlights global networks (e.g., Apolitical) and the need for a shared AI impact framework; stresses capacity building as an engine of innovation (Robin Scott)
Calls for a generic, collaborative international AI impact assessment tool to harmonize frameworks across countries (Speaker 3)
The speakers converge on the importance of deep Brazil-India cooperation, broader South-South partnerships, and the creation of common, collaborative tools for assessing AI impact globally [147-165][74-76][165-166][246].
POLICY CONTEXT (KNOWLEDGE BASE)
Brazil’s experience of coordinated legal and institutional reforms, together with a joint “verifiable convention” with India, illustrates deep bilateral collaboration on AI impact assessment [S77]; the Paris AI Action Summit and subsequent Indian-hosted summit further underscore the momentum for shared international AI impact tools [S71].
Environmental sustainability and green AI
Speakers: Robin Scott, Guilherme Albusco Almeida, Anil Shivastava
Introduces a dedicated AI‑and‑climate course developed with the Stanford Doerr School of Sustainability (Robin Scott)
Describes “green AI” initiatives and AI applications for climate policy, such as forest‑monitoring systems in Brazil (Guilherme Albusco Almeida)
States Google’s commitment to achieve carbon‑neutral data centres worldwide by 2030 and to develop energy‑efficient models (Anil Shivastava)
All three highlight the need to align AI development with climate responsibility, through education, green-AI hardware practices, and corporate carbon-neutral pledges [201-203][205-213][215-218].
POLICY CONTEXT (KNOWLEDGE BASE)
AI governance frameworks are increasingly required to embed environmental safeguards, as highlighted in calls for green AI that couples sustainability with economic viability [S55, S59]; the substantial energy and carbon footprint of large‑scale models is documented in analyses of green AI challenges [S60, S61].
Similar Viewpoints
All stress that AI governance must be built on trustworthy, ethical, and accountable foundations, requiring coordinated policy action and integrity at the highest levels [10-13][23-24][181-187][292-306].
Speakers: Dr. Washima, Shubhavi S. Radha Chauhan, Robin Scott, Dr. Jitendra Singh
Emphasizes the need for trust‑based ethical frameworks and collective policy discourse (Dr. Washima)
Announces the “Mani Vision” for human‑centric, accountable AI (Shubhavi S. Radha Chauhan)
Presents survey data showing low awareness of ethical frameworks (Robin Scott)
Emphasizes dynamic governance, the importance of integrity, and support for the newly launched alliance (Dr. Jitendra Singh)
Both argue that AI must be localized, multilingual and usable on low‑resource edge devices to serve frontline public servants and rural populations effectively [133-135][92-98].
Speakers: Anil Shivastava, Subramanian Ramadorai
Notes AI support for ASHA workers in native languages, illustrating practical deployment in frontline services (Anil Shivastava)
Highlights the Tata AI Saki Immersion Programme empowering rural women and the potential of edge‑device small language models for local needs (Subramanian Ramadorai)
Both emphasize the role of global networks and collaborative platforms to build AI capacity across governments, especially through South‑South partnerships [147-165][165-166].
Speakers: Guilherme Albusco Almeida, Robin Scott
Proposes Brazil‑India R&D cooperation, shared capacity‑building platforms, and South‑South collaboration on AI ethics (Guilherme Albusco Almeida)
Highlights global networks (e.g., Apolitical) and the need for a shared AI impact framework; stresses capacity building as an engine of innovation (Robin Scott)
Unexpected Consensus
Alignment of AI development with climate and environmental responsibility
Speakers: Robin Scott, Guilherme Albusco Almeida, Anil Shivastava
Introduces a dedicated AI‑and‑climate course developed with the Stanford Doerr School of Sustainability (Robin Scott)
Describes “green AI” initiatives and AI applications for climate policy, such as forest‑monitoring systems in Brazil (Guilherme Albusco Almeida)
States Google’s commitment to achieve carbon‑neutral data centres worldwide by 2030 and to develop energy‑efficient models (Anil Shivastava)
Despite coming from different sectors (academia/NGO for Robin, the Brazilian public sector for Guilherme, and private-sector technology for Anil), all three converge on the need for AI to be environmentally sustainable, an alignment not explicitly mandated elsewhere in the agenda [201-203][205-213][215-218].
POLICY CONTEXT (KNOWLEDGE BASE)
Thought leaders argue that AI deployments must prioritize climate justice and sustainability, linking responsible AI to climate goals [S57]; AI is also being positioned as a tool for building climate-resilient systems, especially in agriculture and smallholder contexts [S58]; broader discourse stresses AI’s dual role in mitigating environmental impact while driving sustainability initiatives [S62].
Overall Assessment

The panel displayed strong consensus on five core pillars: (1) the necessity of trust‑based, ethical AI governance; (2) the shift toward small, sector‑specific, multilingual models; (3) the centrality of capacity building and system re‑engineering; (4) the importance of international, especially Brazil‑India, collaboration; and (5) the imperative to embed environmental sustainability into AI development.

High consensus across most speakers indicates a shared vision that can drive coordinated policy actions, joint research programmes, and shared funding mechanisms. This alignment suggests that future initiatives, such as the Digital Capacity Building Alliance, are likely to receive broad support, though implementation details (e.g., evaluation frameworks, financing) remain to be fleshed out.

Differences
Different Viewpoints
Approach to integrating AI into existing public service systems
Speakers: Anil Shivastava, Subramanian Ramadorai
AI cannot be simply layered onto legacy systems; it requires re‑engineering, contextual data, multilingual support, and new security safeguards (Anil Shivastava)
AI can be deployed via small, domain‑specific language models on edge devices, enabling rapid adoption without massive system overhaul (Subramanian Ramadorai)
Anil argues that effective AI integration demands a deep redesign of legacy IT architectures, contextual data pipelines and strengthened security measures [122-129][136-138]. Subramanian counters that the next wave of AI will consist of lightweight, sector-specific models that can run on phones, tractors and local government systems, implying a less invasive integration path [92-98].
POLICY CONTEXT (KNOWLEDGE BASE)
Legacy public-service IT infrastructures are widely recognised as a bottleneck for AI integration, requiring costly upgrades and re-engineering [S53]; discussions on designing AI-enabled public services stress the need for well-informed, trust-based delivery models [S64].
Urgency and timeline for AI governance and action in India
Speakers: Audience, Subramanian Ramadorai
Audience warns of imminent existential risk and urges decisive AI governance action by 2047, criticizing current procrastination (Audience)
Subramanian emphasizes the need for a realistic roadmap and warns against waiting for a symbolic 2047 target, advocating concrete steps now (Subramanian Ramadorai)
Audience members invoke the Doomsday Clock and demand that Bharat act decisively by 2047, labeling inaction as procrastination [227-234][236-244]. Subramanian responds that the focus should be on building a practical roadmap rather than waiting for a future milestone, noting that “confrontation starts very early” and a plan is needed now [244].
POLICY CONTEXT (KNOWLEDGE BASE)
The AI Action Summit timeline and India’s positioning as a leader in sovereign AI underscore pressures for rapid governance action [S71, S72]; however, debates persist over the appropriate speed and sequencing of policy implementation.
Need for a generic, collaborative international AI impact assessment tool
Speakers: Speaker 3, Robin Scott
Calls for a generic, collaborative international AI impact framework assessment tool to harmonize procurement, ethical and competence frameworks across countries (Speaker 3)
Emphasizes building a coalition of willing organisations (e.g., Apolitical) for AI capacity building but does not specifically propose a generic assessment tool (Robin Scott)
Speaker 3 proposes creating a unified international AI impact assessment tool to address fragmented frameworks [246]. Robin, while highlighting the importance of global networks and capacity-building coalitions, focuses on training and surveys rather than a single assessment instrument, indicating a different preferred solution [165-166][181-194].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for a universal AI impact assessment framework echo the collaborative agenda set out at the AI Action summit and the broader push for coordinated international safety standards [S71, S74].
Unexpected Differences
Gendered language and tone from a senior minister versus inclusive discourse
Speakers: Dr. Jitendra Singh, Dr. Washima
Dr. Jitendra Singh remarks that women in the room must listen to his guidance (Dr. Jitendra Singh)
Dr. Washima opens with inclusive welcome to distinguished guests and emphasizes collaborative responsibility (Dr. Washima)
While the broader discussion centers on AI governance and capacity building, Dr. Singh’s comment that “women in the room have to listen to my guidance” [289-290] contrasts sharply with Dr. Washima’s inclusive opening and emphasis on collective responsibility [3][4], revealing an unexpected gender-related tension not directly tied to the technical agenda.
Repeated emphasis on the size of India’s IT workforce without substantive elaboration
Speakers: Subramanian Ramadorai
Subramanian repeatedly states “We know we have 5.8 million professionals” (Subramanian Ramadorai)
The transcript shows an unusually high number of repetitions of the same statistic (sentences [77-91]), which is atypical for a policy discussion and may indicate a lack of substantive argumentation rather than a genuine point of contention.
POLICY CONTEXT (KNOWLEDGE BASE)
Recent reports on India’s AI ambitions highlight massive investment inflows and reference the large IT talent pool as a strategic asset, yet critique the lack of detailed workforce development plans [S67]; parallel discussions note a global shortage of skilled workers and the need for broad talent development beyond headline numbers [S69].
Overall Assessment

The panel largely converged on the importance of ethical, trustworthy AI and the necessity of capacity building, but diverged on how to integrate AI into existing systems, the urgency of governance actions, and the preferred mechanisms for international coordination. These disagreements reflect differing priorities between technical redesign, rapid deployment of lightweight models, and the pace of policy implementation.

Moderate: while there is strong consensus on overarching goals, the varied viewpoints on implementation pathways and timelines could slow coordinated action unless reconciled. Policy design must therefore accommodate both deep system re‑engineering and fast‑track edge AI deployments, balancing immediate urgency with realistic road‑mapping.

Partial Agreements
All speakers concur that ethical, trustworthy AI governance is essential, but differ on the mechanisms: Dr. Washima calls for trust‑based collaborative frameworks [10-13]; Shubhavi proposes a sector‑specific human‑centric vision [23][30-32]; Robin points to gaps in awareness and evaluation needing better monitoring [181-194]; Anil stresses technical safeguards and re‑engineering [136-138]; Dr. Singh emphasizes integrity and hybrid models [292-306].
Speakers: Dr. Washima, Shubhavi S. Radha Chauhan, Robin Scott, Anil Shivastava, Dr. Jitendra Singh
Emphasizes the need for trust‑based collaborative ethical frameworks (Dr. Washima)
Announces the “Mani Vision” human‑centric, accountable AI framework (Shubhavi S. Radha Chauhan)
Highlights low awareness of ethical frameworks and need for evaluation (Robin Scott)
Notes security and data‑sovereignty safeguards for AI (Anil Shivastava)
Stresses integrity and hybrid public‑private models in AI governance (Dr. Jitendra Singh)
All agree that AI should be leveraged for national development and that capacity building is central, yet they propose different pathways: Dr. Washima stresses policy dialogue [10-13]; Shubhavi focuses on small, sector‑specific models [30-32]; Subramanian promotes a partnership model leveraging India’s IT workforce [74-91]; Anil demands deep re‑engineering of legacy systems [122-129]; Robin highlights the need for global networks and evaluation mechanisms [165-166][181-194]; Speaker 1 announces a concrete alliance and platform to operationalise these goals [251-269].
Speakers: Dr. Washima, Shubhavi S. Radha Chauhan, Subramanian Ramadorai, Anil Shivastava, Robin Scott, Speaker 1
Calls for collective policy discourse to harness AI for economic development and social good (Dr. Washima)
Advocates sector‑specific small language models to deliver localized AI services (Shubhavi)
Frames AI as a movement to elevate humanity and proposes India’s third‑way partnership model (Subramanian Ramadorai)
Insists AI cannot be a superficial layer and requires system redesign (Anil Shivastava)
Shows capacity‑building as an engine of innovation and calls for shared impact frameworks (Robin Scott)
Launches a Digital Capacity Building Alliance to provide a non‑proprietary, inclusive framework (Speaker 1)
Takeaways
Key takeaways
AI governance must be human‑centric, built on trust‑based ethical frameworks and collective policy discourse (Dr. Washima, Shubhavi S. Radha Chauhan).
India’s “Mani Vision” and Mission Kalmaragi aim to embed AI in public services through sector‑specific small language models, multilingual support, and edge‑device deployment.
Technical integration of AI requires re‑engineering legacy systems, contextual data pipelines, new security safeguards, and process redesign (Anil Shivastava).
International South‑South collaboration (Brazil‑India) is seen as a strategic avenue for joint R&D, shared capacity‑building platforms, and harmonised AI ethics frameworks (Guilherme Albusco Almeida, Robin Scott).
Significant gaps exist in public‑sector AI readiness: low awareness of ethical frameworks and insufficient pilot evaluation, despite high optimism and large productivity potential (Robin Scott).
Environmental sustainability is a priority: development of AI‑and‑climate curricula, green‑AI initiatives, and corporate commitments to carbon‑neutral data centres by 2030 (Robin Scott, Guilherme Albusco Almeida, Anil Shivastava).
The Digital Capacity Building Alliance was officially launched, linking Mission Kalmaragi’s digital learning platform with a global network of governments, academia, industry, and civil society (Speaker 1, Dr. Jitendra Singh).
Practical examples illustrate AI’s impact on frontline services, such as Tata’s AI‑Saki programme for rural women and AI‑assisted ASHA workers delivering services in native languages.
Resolutions and action items
Launch of the Digital Capacity Building Alliance (global public‑good model) announced by the Minister and the Capacity Building Commission.
Commitment by Google to achieve carbon‑neutral data centres worldwide by 2030 and to collaborate with the Indian government on sustainable AI infrastructure (Anil Shivastava).
Proposal to develop a generic, collaborative international AI impact assessment tool that can be localized to national contexts (Speaker 3 / Robin Scott).
Agreement to pursue Brazil‑India joint R&D and capacity‑building initiatives, leveraging platforms such as Apolitical and existing civil‑service training programmes (Guilherme Albusco Almeida).
Plan to create sector‑specific small language models and edge‑device AI solutions for local public‑service needs (Shubhavi S. Radha Chauhan, Anil Shivastava).
Establishment of continuous learner feedback loops and personalized learning pathways within the Mission Kalmaragi framework (Shubhavi S. Radha Chauhan).
Development of an AI‑and‑climate course in partnership with the Stanford Doerr School of Sustainability (Robin Scott).
Unresolved issues
Specific mechanisms, timelines, and funding structures for the Brazil‑India AI collaboration remain undefined.
How to operationalise the proposed international AI impact assessment framework across diverse legal and institutional environments was not detailed.
Concrete steps to improve public‑sector awareness of ethical frameworks and to institutionalise systematic pilot evaluation are still pending.
The audience’s urgent call for decisive action by 2047 (referencing the Doomsday Clock) was acknowledged, but no concrete roadmap was presented.
Details on how the Digital Capacity Building Alliance will coordinate standards, data‑sharing, and governance among participating countries were not clarified.
Suggested compromises
Adoption of a hybrid model that blends AI automation with human‑in‑the‑loop oversight to preserve integrity and accountability (Dr. Jitendra Singh).
Balancing rapid AI deployment with the need for ethical safeguards and multilingual, context‑specific solutions rather than monolithic models (Shubhavi S. Radha Chauhan).
Combining “green AI” (energy‑efficient models) with AI applications that support climate policy, thereby addressing both sustainability and utility (Guilherme Albusco Almeida).
Encouraging public‑private partnerships that share expertise while maintaining regulatory independence, to avoid over‑reliance on either sector (Dr. Jitendra Singh).
Thought Provoking Comments
The future of AI, more precisely the agentic AIs, will not be in massive monolithic models. It will be in small language models, context‑specific, sectoral, and decentralized.
Shifts the narrative from large, generic AI models to localized, purpose‑built models, highlighting a strategic direction for AI deployment in public services.
Redirected the discussion toward the feasibility of edge AI and sector‑specific solutions. Subsequent speakers (e.g., Subramanian and Anil) expanded on the need for multilingual, small‑scale models for rural and frontline workers, deepening the technical conversation.
Speaker: Shubhavi S. Radha Chauhan
India offers a third way – a partnership model – between the US market‑led experimentation and China’s state‑led techno‑nationalism.
Challenges the prevailing binary view of AI geopolitics and proposes an alternative collaborative approach for India.
Prompted panelists to discuss India’s unique strengths (large talent pool, digital public infrastructure) and opened the floor for South‑South collaboration, leading directly to Guilherme’s remarks on Brazil‑India partnerships.
Speaker: Subramanian Ramadorai
According to our 8,000-person global survey, only 26% of public servants implementing AI say they understand their own government’s ethical frameworks – meaning roughly three-quarters are essentially freestyling. Moreover, while 72% plan pilots, only 45% have a concrete evaluation plan.
Provides concrete, data‑driven evidence of gaps in ethical awareness and evaluation rigor, moving the conversation from abstract ideals to measurable shortcomings.
Shifted the tone to a more urgent, problem‑focused discussion. It spurred calls for better training (echoed by Guilherme) and highlighted the need for systematic evaluation, influencing later remarks about capacity‑building curricula.
Speaker: Robin Scott
In Brazil we have four distinct capacity‑building tracks – senior leaders, IT managers, data curators, and general civil servants – each with tailored knowledge modules. A South‑South coalition, like the one with Apolitical, can scale this model globally.
Introduces a concrete, scalable framework for public‑sector AI training and emphasizes the power of South‑South cooperation.
Provided a practical template that other panelists referenced when discussing India’s own training pathways. It broadened the conversation to include international best practices and collaborative platforms.
Speaker: Guilherme Albusco Almeida
AI cannot simply be layered onto legacy systems. Existing silos of data and business logic must be re‑engineered, with contextual data, multilingual support, and security considerations built in from the ground up.
Challenges the assumption that AI integration is a plug‑and‑play exercise, highlighting deep technical and organizational changes required.
Steered the dialogue toward concrete implementation challenges, prompting further discussion on data sovereignty, edge deployment, and the need for new governance processes.
Speaker: Anil Shivastava
Capacity building is about learning to be a good learner. We must shed old rules – we have already removed about 2,000 – and create a dynamic, continuous learning loop between public and private sectors.
Frames capacity building as a cultural shift rather than a one‑off program, linking it to regulatory reform and the concept of a ‘digital public good.’
Reinforced the earlier points about continuous learning and led to the formal launch of the Digital Capacity Building Alliance, giving the discussion a concrete policy outcome.
Speaker: Dr. Jitendra Singh
We are at 80 meters to the edge of the precipice – the doomsday clock analogy – and must act now rather than procrastinate until 2047.
Uses a stark metaphor to underscore the urgency of AI governance and capacity‑building, challenging the panel’s longer‑term timelines.
Created a moment of heightened urgency, prompting the moderator and speakers to emphasize immediate actions, such as the launch of the alliance and the call for rapid implementation of ethical frameworks.
Speaker: Audience member (Prof. Charu)
Overall Assessment

The discussion was shaped by a series of pivotal interventions that moved it from high‑level rhetoric to concrete, actionable insights. Early remarks about decentralized, sector‑specific AI models set a technical direction that was fleshed out by Anil’s emphasis on re‑engineering legacy systems and multilingual deployment. Subramanian’s geopolitical framing introduced the notion of a ‘third way’ for India, which was operationalized through Guilherme’s South‑South capacity‑building blueprint. Robin’s data‑driven critique exposed critical gaps in ethical awareness and evaluation, sharpening the urgency conveyed later by the audience’s doomsday‑clock warning. Dr. Jitendra Singh’s cultural‑learning perspective and the formal launch of the Digital Capacity Building Alliance provided a policy anchor, turning the conversation into a concrete commitment. Collectively, these comments redirected the dialogue toward practical collaboration, highlighted systemic challenges, and culminated in a clear call to action, thereby giving the summit both strategic depth and immediate relevance.

Follow-up Questions
How can Brazil and India collaborate more closely to shape the global conversation around AI trust, alignment, and governance, and which specific collaboration areas would have the greatest impact as AI becomes more autonomous?
Identifies need for bilateral cooperation to influence global AI norms and leverage complementary strengths.
Speaker: Subramanian Ramadorai (question) / Guilherme Albusco Almeida (suggested areas)
What are the biggest gaps in AI readiness within public institutions worldwide, and how can the global conversation be shifted towards work reinvention?
Understanding readiness gaps and labor transformation is essential for effective AI adoption in government.
Speaker: Subramanian Ramadorai (question) / Robin Scott (provided gaps)
How can governments and AI companies work together to ensure that AI‑driven public infrastructure is aligned with climate responsibility, energy efficiency, and sustainable growth?
Addresses the environmental footprint of AI deployments and seeks joint strategies for green AI.
Speaker: Subramanian Ramadorai (question) / Robin Scott, Guilherme Albusco Almeida, Anil Shivastava (responses)
Should India achieve its AI and governance objectives by 2047, or is there a risk of procrastination in meeting these responsibilities?
Raises concern about timelines and accountability for India’s AI roadmap.
Speaker: Audience member (Professor Charu)
Can a generic, collaborative international AI impact framework assessment tool be developed to harmonize AI procurement guidelines, ethical frameworks, and competence frameworks across countries?
Calls for a standardized yet locally adaptable framework to guide AI implementation globally.
Speaker: Audience member (Professor Charu)
How can the effectiveness of AI pilots in government be systematically evaluated, given the gap between pilot implementation and evaluation planning?
Ensures that AI pilots generate measurable outcomes and inform scaling decisions.
Speaker: Robin Scott (identified gap)
What research is needed to develop sector‑specific, small language models that can run on edge devices for rural and underserved contexts?
Small, domain‑specific models are crucial for AI accessibility in low‑resource environments.
Speaker: Subramanian Ramadorai (statement)
What are the security and data‑sovereignty risks of layering AI onto legacy government systems, and how should they be mitigated?
Legacy integration poses vulnerabilities that must be addressed for safe AI adoption.
Speaker: Anil Shivastava (identified risk)
How can multilingual AI capabilities be built and deployed for frontline workers such as ASHA staff to deliver services in native languages?
Ensures inclusive service delivery and maximizes AI utility in diverse linguistic settings.
Speaker: Anil Shivastava (statement)
What mechanisms can be put in place to continuously capture and act on learner feedback loops to improve capacity‑building programs?
Effective feedback loops are essential for refining training and competency development.
Speaker: Shubhavi S. Radha Chauhan (mentioned)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Ethical AI: Keeping Humanity in the Loop While Innovating


Session at a glance: summary, keypoints, and speakers overview

Summary

The UNESCO-sponsored panel “Humanity in the Loop” convened experts to examine how to balance AI innovation with ethical safeguards, emphasizing a human-centred approach, especially for the Global South [1-3]. UNESCO’s 2021 Recommendation on the Ethics of Artificial Intelligence, adopted by 193 member states, frames ethical AI around three non-negotiable pillars (human rights, human dignity and fundamental freedoms) and calls for these to guide development and deployment [20-25]. Maria Grazia highlighted the persistent gap between high-level principles and concrete practice, noting that translating the recommendation into on-the-ground actions remains a major challenge [35-37].


Dr Tawfik Jelassi argued that innovation and ethics are mutually reinforcing, insisting that ethical considerations must be embedded ex-ante (“by design”) and that the UNESCO recommendation already provides a global set of principles such as human oversight and non-discrimination [38-41]. Debjani Ghosh stressed that the real dilemma is how technology is used, not whether it is innovative, and called for human accountability, continuous oversight throughout the design-to-commercialisation cycle, and “ethics-by-design” safeguards rather than after-the-fact regulation [48-69]. Brando Benifei reinforced this view, describing the EU AI Act’s risk-based framework, its bans on unacceptably risky uses such as predictive policing, and the need for trust-building transparency, while urging broader global cooperation on issues like military AI and existential risks [74-87][191-200].


Virginia Dignam critiqued the narrow “hammer” metaphor for AI, urging a broader toolbox that incorporates diverse cultural traditions (e.g., African Ubuntu) and calling for education that equips engineers to ask why a problem matters and who benefits before coding solutions [95-108][112-138]. Paula Goldman illustrated how Salesforce operationalises these ideas by embedding human-in-the-loop controls, real-time escalation mechanisms, and inclusive design features that improve accuracy and commercial success, demonstrating that ethical design is also profitable [140-159][220-226]. Debjani added that the AI Impact Commons, a repository of impact stories from over 30 countries, shows how developing nations are already leveraging AI for health, food security and climate resilience, though they need more resources to scale such solutions [161-174].


Dr Tawfik highlighted UNESCO’s grassroots work, providing community radios and early-warning systems in remote African villages, to show how AI-enabled information can transform lives when coupled with capacity-building and advocacy [204-214]. When asked about policy, Virginia defined AI policy as the set of tools, skills and knowledge needed to assess AI’s societal impact from the outset, rather than focusing solely on the technology itself [264-269]. The discussion concluded that achieving ethical, human-centred AI requires early-stage oversight, inclusive education, industry-government collaboration, and coordinated international governance to ensure that AI serves humanity rather than harms it [35-37][191-200][124-138].


Keypoints


Major discussion points


Ethics and innovation are not opposing forces; ethical AI must be built-in from the start.


Maria Grazia argues that UNESCO’s 2021 Recommendation shows “ethics… is something very concrete” and that innovation does not have to be sacrificed for ethics [20-27]. Dr. Tawfik Jelassi reinforces this, saying ethics and innovation “reinforce each other” and must be embedded “ex-ante” in design [38-41].


Turning high-level principles into actionable practice requires robust regulation and governance.


The panel highlights the gap between “principles and practice” [35-37] and stresses that regulation should be risk-based rather than a post-hoc barrier. Brando Benifei outlines the EU AI Act’s risk-based approach, the prohibition of certain uses (e.g., predictive policing), and the need for transparency to build trust [72-84][80-86].


Human-centered accountability and inclusive participation are essential, especially for the Global South.


Debjani Ghosh stresses that accountability remains with humans and that oversight must be built throughout the development lifecycle [61-68]. Virginia Dignam calls for expanding beyond a “Western Cartesian” view of AI, advocating for culturally diverse models (e.g., Ubuntu) and broader skill development [106-113][124-138].


Industry examples show practical pathways to ethical AI through human-in-the-loop design and inclusive product development.


Paula Goldman describes Salesforce’s concrete practices: monitoring AI outputs, escalation to humans, and designing for accessibility (e.g., handling diverse accents and disabilities) [140-158][220-227]. She argues that inclusive design not only serves ethical goals but also yields superior commercial performance [228-231].


Global cooperation and collective intelligence are required to address trans-national AI risks.


Both Brando Benifei and Dr. Jelassi call for multilateral frameworks to manage existential threats such as military AI, emphasizing UNESCO’s role in fostering peace through education, culture, and information [191-199][204-210].


Overall purpose / goal of the discussion


The session, titled “Humanity in the Loop: Balancing Innovation and Ethics in the Age of AI,” aimed to bring together UNESCO officials, policymakers, academics, and industry leaders to (1) reaffirm that ethical considerations can coexist with rapid AI innovation, (2) identify concrete mechanisms (regulatory, organizational, and educational) to translate UNESCO’s ethical principles into real-world practice, and (3) explore how global and multi-sector collaboration can ensure AI serves humanity, especially in the Global South.


Tone of the discussion


Opening: Formal and optimistic, with UNESCO leadership framing ethics as a concrete, universal foundation [20-27].


Mid-session: Becomes more critical and analytical, questioning gaps between policy and practice, and debating regulatory approaches [35-37][72-84].


Later: Shifts to a collaborative, solution-focused tone, highlighting concrete industry examples, inclusive education, and the promise of collective action [140-158][191-199][204-210].


Overall, the conversation moves from high-level endorsement of ethical AI to a nuanced, sometimes contested exploration of implementation challenges, ending on a hopeful note about shared responsibility and global cooperation.


Speakers

Speakers (from the provided list)


Tim Curtis – Regional Director for UNESCO for South Asia (UNESCO) – expertise in UNESCO regional programs and AI ethics. [S19]


Maria Grazia – Chief of the Executive Office of UNESCO’s Social and Human Sciences sector; moderator; microeconomist specializing in innovation and new technologies. [S12]


Dr. Tawfik Jelassi – Assistant Director-General for Communication and Information, UNESCO; expertise in communication, information, and knowledge societies. [S15]


Debjani Ghosh – Distinguished Fellow, NITI Aayog (India); former role with NASCOM; expertise in AI ecosystem development and policy in India. [S24]


Brando Benifei – Member of the European Parliament; involved in the EU AI Act and risk-based AI regulation. [S9]


Virginia Dignam – Professor, Director of the AI Policy Lab, Umeå University; member of UNESCO’s AI Ethics Experts Without Borders; expertise in AI policy and ethics. [S8]


Paula Goldman – Chief Ethical and Humane Use Officer, Salesforce; expertise in responsible AI deployment and ethical AI design. [S1]


Rita Soni – Audience member; works with a company delivering tech services in small-town India; focuses on digital-divide and inclusive AI development. [S3]


Audience – General audience participants, including questions from members such as Rajan (CEO & founder of a startup).


Additional speakers (not in the provided list)


Rajan – CEO and founder of a startup (Business Club TV); asked a question about “AI policy.”


Full session report: comprehensive analysis and detailed insights

1. Opening remarks (Tim Curtis) – Tim Curtis welcomed participants, thanked the Government of India for its collaboration, and reiterated UNESCO’s mandate to foster AI that is both innovative and ethically human-centred, with particular attention to the Global South. He introduced the panellists – Dr Tawfik Jelassi, Assistant Director-General for Communication and Information; Professor Virginia Dignam of Umeå University; Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce; Debjani Ghosh of NITI Aayog; and Brando Benifei, Member of the European Parliament – and handed the session to moderator Dr Maria Grazia [1-11].


2. Moderator’s framing (Maria Grazia) – Maria Grazia opened by questioning the wording of the title “Balancing Innovation and Ethics”, arguing that innovation need not be sacrificed for ethics [12-14]. Drawing on her micro-economics background, she linked productivity and welfare gains to new technologies [15-18]. She reminded the audience that UNESCO’s 2021 Recommendation on the Ethics of Artificial Intelligence, adopted by 193 member states, rests on three non-negotiable pillars – human rights, human dignity and fundamental freedoms – and calls for these to guide AI development and deployment [20-25]. She then asked Dr Jelassi about the biggest gaps between high-level principles and concrete practice [35-37].


3. ADG’s response (Tawfik Jelassi) – Jelassi replied that there is no contradiction between innovation and ethics; the tension lies between innovation and regulation. He argued that ethically designed systems are more trustworthy and therefore more widely adopted, and stressed that AI must be “ethical-by-design”, built from the outset with human oversight, non-discrimination, cultural diversity and environmental sustainability [38-41].


4. Operationalising ethics (Debjani Ghosh) – Ghosh reframed the debate as a choice between using AI to eradicate cancer, hunger and poverty or to amplify conflict, noting that universal ethical alignment is impossible and that accountability must remain with humans [48-56]. She advocated embedding regulation throughout the AI lifecycle, with “flag-offs” at each design stage and sandbox testing before commercial release, thereby making ethics a default design feature [61-68].


5. Regulatory perspective (Brando Benifei) – Benifei explained that the EU AI Act adopts a risk-based classification, identifying high-risk sectors such as healthcare, justice and workforce management, and prohibiting particularly harmful uses including predictive policing, emotion-recognition and manipulative subliminal techniques. He argued that a transparent, risk-based regime builds the trust needed for AI adoption, especially in democratic societies where mistrust can stall deployment [80-87].


6. Education & epistemology (Virginia Dignam) – Dignam criticised the prevailing “hammer-and-nail” metaphor for AI innovation, pointing out that most development follows a Western Cartesian tradition that equates intelligence with individual cognition. She called for a broader toolbox that incorporates diverse epistemologies, citing the African Ubuntu philosophy (“we are, therefore I am”), and urged that engineers receive humanities and social-science training so they can ask why a problem matters, who benefits and who loses. She warned that AI has become an “empty signifier” treated as magic, and that interdisciplinary education is needed to restore precision to the discourse [106-138].


7. Industry practice (Paula Goldman, Salesforce) – Goldman described concrete steps taken at Salesforce: continuous monitoring of AI outputs, real-time escalation to human operators, sandbox testing of edge cases, and inclusive design that recognises diverse English accents, deaf speech patterns and other accessibility needs. She highlighted that such inclusive design not only meets ethical standards but also delivers higher accuracy and commercial success [140-158][220-227].


8. Global-South impact (Debjani Ghosh) – Ghosh presented the AI Impact Commons, an online repository now hosting impact stories from more than thirty countries, showcasing AI applications that address malnutrition, pharmaceutical safety, climate resilience and other pressing challenges. She noted that, contrary to the “luxury of the developed countries” narrative, developing nations are already leveraging AI but need additional resources to scale solutions. She highlighted capacity-building programmes such as Startup India that target Tier-2/3/4 cities, thereby democratising both access to AI and its design and development [166-174][291-298].


9. UNESCO grassroots example (Tawfik Jelassi) – Jelassi recounted a remote African village where UNESCO first installed community radios, later enabling mobile telephony, internet connectivity and early-warning flood systems. He argued that when information reaches the most marginalised, AI-enabled services can transform lives, provided there is accompanying capacity-building, advocacy and human-centred oversight [204-222].


10. Audience Q&A


  • Rajan (CEO, startup) asked Professor Dignam “What is AI policy?” – she defined it as the set of tools, skills and knowledge required to assess AI’s societal impact throughout its lifecycle, rather than merely the technical design of the technology [264-269].


  • Rita Soni raised the issue of developers working under unreliable power and connectivity. Ghosh responded that democratising AI design means extending capacity-building to smaller cities and underserved regions, noting that India’s startup ecosystem is already seeing faster growth in Tier-2/3/4 cities, indicating progress toward inclusive participation [279-298].


11. Closing – The panel concluded with a collective affirmation that innovation and ethics can be mutually reinforcing when ethical safeguards are embedded ex-ante, that inclusive and culturally diverse approaches improve fairness and commercial performance, that risk-based regulation such as the EU AI Act can balance innovation with protection of fundamental rights, and that global multilateral cooperation, capacity-building and shared impact repositories are essential to ensure AI serves humanity worldwide.


Session transcript: complete transcript of the session
Tim Curtis

Welcome this afternoon to this UNESCO-sponsored event. My name is Tim Curtis, I’m the Regional Director for UNESCO for South Asia, and I’m very happy to have you all for the event today, Humanity in the Loop: Balancing Innovation and Ethics in the Age of AI. Of course we’re grateful to the Government of India for its collaboration on this session, which we at UNESCO believe goes to the heart of our engagement with the ethics of artificial intelligence, namely how to ensure an ethical and human-centred AI deployment whilst also encouraging the development of artificial intelligence and innovation in a technology that can offer so many benefits to humanity, including and in particular to the Global South.

So it gives me great pleasure to just present today’s panellists and moderator. We have Dr. Tawfik Jelassi, who is Assistant Director-General for Communication and Information and who has really been a pivotal figure in UNESCO’s work on AI ethics. Professor Virginia Dignam, who is Director of the AI Policy Lab at Umeå University; she is also a member of UNESCO’s AI Ethics Experts Without Borders and has been supporting UNESCO’s readiness assessment methodology in multiple countries. We are also privileged to have Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce, which is a member of UNESCO’s Business Council; she has really been leading by example on responsible AI ethics in the private sector. And Debjani Ghosh, a Distinguished Fellow at NITI Aayog, who needs no introduction here in India, a household name for her role in building and leading India’s AI ecosystem.

Thank you for coming. And finally, it is a great pleasure to welcome Brando Benifei, a member of the European Parliament, who will share his insights on the EU AI Act and how they have been able to navigate balancing innovation and ethics. And of course, our moderator, Dr Maria Grazia, Chief of the Executive Office of UNESCO’s Social and Human Sciences sector. Please, Maria Grazia, over to you.

Maria Grazia

Hello, good afternoon. So we’ll try to have this session very dynamic because it’s after lunch, it’s Friday, over five days, very interesting, a long week. So let me start by challenging the very title of this meeting, that is Balancing Innovation and Ethics in the Age of AI. Now, nobody’s first. effect. So I’m a microeconomician, which is a very complicated word, which looks like a rude word, but it’s not. It’s mathematics applied to economics and especially applied to understanding the dynamics of innovation and new technologies. Why I’m saying that, because of course the question of innovation, what drives innovation, how can we get more innovation, is something that we always ask by the time you study what drives productivity growth, what drives welfare and well-being.

And then at times we also hear that having constraints or frameworks will actually hinder these dynamics. The position of UNESCO has been very clear: this is not true. The member states adopted the UNESCO Recommendation on the Ethics of Artificial Intelligence already in 2021, which means that all countries, including India, had been discussing these issues since 2019 to reach an agreement on what ethics actually means and on how we can put technologies at the service of humanity, and not let anything that is technologically feasible go ahead if that technological feasibility actually hurts people, hurts humanity. So for us at UNESCO, the ethics of AI means something very concrete.

It means AI, technologies, and here I would like to invite you to think of it as technologies, plural; it’s not one single element, it’s a lot of things, that actually abide by three simple things that too often we take for granted, whereas perhaps we should think about them more: human rights, human dignity, and fundamental freedoms. If we are able to develop, deploy, and use technologies in a way that abides by these three components, then for sure we have technologies that serve humanity. And why am I challenging the very topic? Because too often the narrative that is used out there puts innovation and ethics, or ethical AI, which means an AI that is ethical throughout its life cycle, as trade-offs.

So if we innovate, it cannot be ethical, because by the time it’s out, we don’t have the time to check on these things. Well, think of a parallel, and then we take it from there to the concrete dynamics of AI. If you were to think of one sector that is very heavily regulated, perhaps what comes to mind is pharma, pharmaceuticals. Now, to my knowledge, but that may be my ignorance, I have never seen a single study able to prove that the regulation in that sector has hindered the innovativeness, the productivity, or even the remuneration of the sector. And by the same token, the pervasiveness of AI to some extent leads us to think of the pervasiveness of the paracetamol we use every time we have an ache, as I think some of you this afternoon might have, and after listening to me, perhaps even more.

But, you know, it’s really the pervasiveness of technology that touches our lives each and every day in many ways. And this is what I think is important to discuss from different perspectives. Allow me to start with my ADG, ADG Jelassi. As I mentioned, from UNESCO we bring the global perspective, because the Recommendation was adopted by 193 member states. Now, very often what is very challenging is to go from principles to practice. That is, sometimes we know what we need to do, but then the question becomes: how do we translate it into practice? So, ADG Jelassi, where do you see the biggest gaps between the principles and what is instead happening on the ground?

Dr. Tawfik Jelassi

Thank you, Maria Grazia. Maybe before I briefly answer your question, let me say that you used the words innovation and ethics. Personally, I don’t see an issue, a contradiction, between the two; I see it more between innovation and regulation, because, say, to be creative and innovative you should free up the minds of people; you should not constrain them, you should not tie their hands. I used to be chair of a telecom operator’s board, and there, of course, with telecom and mobile phones and access to the private data of consumers, the issue of regulation is paramount, but we don’t want regulation that hinders innovation. So I don’t see ethics and innovation being in contradiction; on the contrary, I think they reinforce each other. How is that?

Because clearly, if you integrate ethical reflection into the design of AI systems, those systems will be more respected, more trustworthy, more used, and therefore more broadly deployed across society. So I see ethics and innovation really reinforcing each other. Quite often at UNESCO we say AI systems have to be ethical by design; it should be done ex ante, not ex post, not when we see the mistakes, hazards, risks, and harmful impacts of AI and say, wait a minute, let’s go back and see what went wrong in those models and data sets, are there some biases, etc. It has to be done from the very early stage, and therefore innovation has to be human-centric and has to be contextualized. There is no one-size-fits-all; we know that. What you can provide is an overarching framework, a broad set of guidelines and principles, as you said, Maria Grazia, and this is what the UNESCO Recommendation on the Ethics of AI is about. You know that this Recommendation has been, so far, the only global recommendation of its kind.

It was adopted back in 2021 by 193 member states of UNESCO, and it calls for human oversight, non-discrimination, respect for cultural diversity, and respect for environmental sustainability. These are the principles that need to be translated into action and operationalized within a given context.

Maria Grazia

Thank you very much, ADG Jelassi. Let’s now go to Debjani, because I would like to go further into this operationalization question. From your work at NITI Aayog, and also your experience with NASSCOM, what are the mechanisms that can really help embed ethical reflection into the everyday life of both companies and sectors?

Debjani Ghosh

Thank you. Thank you for having me here. First of all, I’ll just go back to the topic, if I may, for a second, because I don’t think the choice is between innovation and ethics. I really don’t. I think the choice is between: do we use technology to ensure that everyone in the world is cancer-free, everyone in the world lives with dignity, everyone in the world has enough to eat, or do we use the technology to make the world a much bigger conflict zone, develop the next atom bomb, and worse? I think the choice is that. And therefore the biggest challenge we have, and I hate applying the label of ethics to technology, is: can we, with all the wisdom in this room, say that we will be successful in aligning every single human on this planet to the same ethical values?

The answer is no. We’re not going to be able to do that, and we know it. So as long as we humans don’t align to the same ethical values, you will always have good actors and bad actors, and you know that technology is going to be used in ways that are non-ethical. So the accountability, you’ve talked about humanity in the loop, comes back to us. I think it’s very important to understand that, because in all our dialogues on technology we somehow delegate the accountability to technology. I don’t think we can as yet. Maybe in another ten years, when cognitive reasoning becomes a thing, maybe then, but not as yet, because as somebody who actually builds code and builds agents, I know they’re not that intelligent yet.

So I think the accountability on humans is what we have to focus on. And going back to your question about how industry ensures this: one of the things I’m very clear about is that regulation is usually an afterthought. You develop the technology and then you say, okay, how do we now regulate it to ensure that it’s used right? That has to fundamentally change. Oversight has to be built into the entire development process, from design to commercialization, with the right flag-offs at every part of the design and development process. If you do that, and you’re able to, you know, check the product you are developing at every single stage against certain standards that have been developed, and then, hopefully, after the entire development phase there is also a sandbox where you test out the impact.

You will get to a stage where ethics becomes by design versus an afterthought. And I think that’s what we have to move towards.

Maria Grazia

Thank you. I’d like to change the order of the speakers a bit, because you brought in the argument about regulators, and you have one sitting next to you whom I’m going to turn to. How do you see this relationship? Because we know the regulation that has been pushed in Europe is fundamentally risk-based. So what was the logic, and how does this relate to what she was discussing, the human oversight or even the redress mechanisms we might want to put in place in order to have AI that is ethical?

Brando Benifei

Well, first of all, excuse me for the voice, but that’s it. Exactly, thanks to technology you can hear me anyway. So I can also adhere to the point that innovation and ethics are not set one against the other. In fact, this summit, which concentrates on impact, on action, on diffusion, is not separate from keeping track of reflection, of safety, of how to protect human rights, of how to make AI human-centric; these things are intertwined. The point is how we regulate effectively and how we find a good balance. But I want to bring a maybe controversial point to the table, because I have a strong conviction on this. In one of these rooms just now, I was on another panel with Anu Bradford, a professor at Columbia University who has written the book The Brussels Effect; the EU, often at the forefront of regulation, has indeed opened the way for many regulatory pathways. Yet even Europe chose, when looking at social media, not to regulate. We let social media diffuse without regulation, and today we are discussing limits for minors, we heard about that in the inaugural session; we are discussing misinformation and the labelling of deepfakes, even Prime Minister Modi talked about that in the inaugural session. But we are doing it all now, after a lot of things have happened, and my point, that’s my opinion, is that we already have unmodifiable consequences. So when we talk about when we should regulate, the question is whether we should regulate ex ante or let the innovation flow and act only ex post.

Sometimes we might be wrong and risk unchangeable effects. So we need to build a balance that doesn’t hinder innovation but also identifies human-rights challenges. The AI Act tried to build a risk-based approach, identifying areas where we need AI to be overseen: workforce uses of AI, healthcare uses of AI, uses of AI in the administration of justice. There, we want to be sure that the data used for training is quality data, that cybersecurity is sufficient, that the governance of the data is solid, and that there is human control. These are examples of what we have identified. And in fact, we even chose to prohibit a few use cases: for example, predictive policing, emotional recognition in workplaces and places of study, manipulative subliminal techniques.

I don’t think it’s a taboo to decide that some use cases of AI we don’t want in our society, and we just keep them out. So this approach based on risk, you can like it this way or want to modify it, but it’s an interesting perspective, because you can choose what you think is in need of a certain regulation, and you can also promote transparency, which I think is crucial to build trust. Without trust, especially in democratic contexts, it’s impossible to accelerate adoption of AI, which is still a big challenge for both the global north and the global south. The numbers tell us that a lot of companies or public administrations that could benefit from an ethical and correct use of AI are not using it because they don’t know what could…

Maria Grazia

You put forward a very important point, Brando: perhaps we might not be able, or might not want, to decide what the technology should do for us. But for sure we might want to discuss and agree on what we do not want the technology to do for us, because these are unacceptable uses and deployments. And this also highlights the importance of awareness, of the centrality of people, of this human-centered approach. Here I would like to invite Virginia into the conversation, because of course you, as an educator, as part of this beautiful world of educators, as a professor, have this constant contact with, and the ability to interact with and nurture, humankind.

So what do we have to do to avoid people being just consumers, or, you know, merely exposed to the technology, instead of steering it toward where we want to go?

Virginia Dignam

Sure. Thank you very much. Thank you for inviting me to be here. Again, like all my previous colleagues, I want to go back to the title, and I’m not going to talk about the balancing part. I’m just going to claim, to be controversial and to wake you all up, that we are doing both the innovation and the ethics-and-regulation side all wrong. We are doing it not in the way it needs to be done. On the innovation side, we are doing it wrong because we somehow understand innovation as the capacity of using this hammer we found a couple of years ago, of Gen AI or whatever, and now we want to use the hammer on any nail that we find.

Innovation is much more than that. Innovation is really challenging ourselves to go further. And I want to go back to a sentence that has stayed with me and is the main thing I’m taking from this summit. A couple of sessions ago, where I spoke, someone said: most people developing AI never experienced power cuts, never experienced broken roads. I would like to go further. AI, and I have been working in AI for 40 years, through all the different types of AI that existed before, has been developed squarely in the Western, Cartesian tradition: I think, therefore I am. First, it is individualistic; and then it equates intelligence with cognition. Human intelligence is much more than cognition.

If you were to think about AI developed, for instance, in the African Ubuntu tradition, which says “we are, therefore I am”, it would be a completely different type of AI. So we do need to challenge ourselves not to go around with this hammer that is already there, trying to find nails and calling that innovation. It is not innovation; it’s just running around like headless chickens to see if one of those hammers works. So that’s one. On the side of ethics and regulation, there are two assumptions that usually come with this kind of framing: that ethics is this finger that points, thou shalt behave, thou shalt be good, and that regulation is about prohibiting you from doing things.

Neither is ethics the finger, nor is regulation necessarily only about prohibitions. Moreover, regulation, like AI, like the hammer, like the telephone, is an artifact that we built. We built regulation, and we can apply to regulation, and to the application of ethics, exactly the same principles we apply to technology: let’s experiment, let’s try, let’s verify, let’s evaluate, let’s see what’s there, and not have this idea of the finger, or of laws written in stone that stay there once and forever. So that is going back to the title. And now, very quickly, to your question, because I don’t want to take much time: I think education needs to start exactly from this point. Technology alone is not enough, so we really need to improve the education of the engineers, the computer scientists, the data scientists on the humanities side. As engineers, we know very well how to solve a problem; we never ask ourselves why is this a problem, who has this problem, what are the alternatives to my solution, who gains, who loses, what is gained, what is lost. This is the humanities. We need to somehow bring that together, in the engineering case and in the humanities and social sciences case.

We need them, because I’m an engineer, to help us understand that we need to be much more precise about what we are talking about. AI at this moment is actually an empty signifier. It doesn’t mean anything: everything is AI, nothing is AI, all kinds of things are AI. The applications are AI, the sectors are AI, the technology is AI, the research, everything is AI. And we cannot just go around with this word, which actually means magic. In most politicians’ talks it means magic, and we want to regulate magic. Okay, good luck. So we need the humanities and the social sciences to really help us be precise about what we are doing. This is the education we need.

Maria Grazia

Fantastic, you couldn’t have made it easier for me to then ask Paula: how are we doing that in companies? Because it is very easy to say we need to translate the principles, the values, into concrete models that actually work, work for a company, work to deliver results, and work for people.

Paula Goldman

Yes, indeed. Well, first of all, thank you for that. We were just talking about how this is our last speaking panel of the week, and that was a fiery way of drawing things together; I really appreciate it, kind of an energy boost. So yeah, I think the answer is actually much more practical and much less abstract than one might imagine, so I’ll just tell you a little bit about my experience. I spend my days at Salesforce both testing our products and making sure that our AI has features baked into it so that our customers can observe what’s going on, know how to tweak the controls, and understand, for example, when they should set an AI agent to escalate to a human, or a human to escalate back to AI, and so on.

And when we do this, it’s not that we at Salesforce think we have all the answers, because clearly we don’t, and we serve a variety of industries all over the world. But all of our customers are basically asking the same questions, right? They’re asking: how do I know what kind of results I’m getting? How can I tell if something goes wrong? What are my options if something goes wrong? What part of AI ethics is your responsibility and what part is mine? And these questions don’t necessarily have the most mature answers yet, because we’re in the early innings of AI agents and there’s a lot more work to do. But actually, these are the right questions to be asking, and they also allow for some flexibility and some cultural or industry specificity, for people to find the right answers to the questions.

So that would be part one of my answer: it’s actually very, very practical. To adopt AI, companies and organizations need to be able to trust that it’s going to work. They don’t want to be embarrassed by it, and they’re not going to be able to scale it if it doesn’t work. So that’s number one. The second thing, which we’re increasingly finding when we work with companies on this, is that the companies most successful at scaling AI put people at the center of the transformation. They work not just top-down, as in “you shall use this application”. They give people a chance to have a voice about what is actually working.

What is actually most useful to them in their day-to-day work? Where is AI actually going to help them, and where is it kind of useless? It’s that kind of understanding of how work actually gets done, of what actual processes are going to benefit from that kind of application, that I think is really important, and it allows people to stay at the center of this large-scale transformation that we’re part of.

Maria Grazia

…that might happen or should happen in the context of making AI ethical by design?

Debjani Ghosh

Well, in my current role at NITI Aayog, which is the think tank for the Government of India, we’re looking at what the unlocks are for technology, including AI, to ensure that we can use technology to solve some of the biggest problems. Now, what Professor Virginia said about AI as a hammer, I think that’s a luxury of the developed countries, and I do agree with you when it comes to developed countries. But when you come to developing countries, where you don’t have a lot of resources, you cannot afford to use technology that takes a lot of deep investment to do things where you’re not sure, where you’re not sure of the ROI. One of the examples I want to give is that, as part of this summit, seven working groups were set up looking at different problems.

I chaired one of the working groups, on economic development and social good, which was all about impact and how you scale impact, and we had around 50 countries participating. One of the things that came out of that working group, and it is one of the outcomes of this summit, is the creation of the AI Impact Commons globally, and it’s online; you can look it up at aiimpactcommons.global. It has impact stories from more than 30 countries, and counting, growing every day, with learnings on what kinds of problems can be solved and how you scale the solutions. And the reason I said it’s a luxury of developed countries is that when you look at those impact stories, most of them are from developing countries, and you’ll be amazed at the kinds of problems they’re solving, from malnutrition to farming, you know, to farmer suicides: how do you reduce farmer suicides by using technology to improve yield?

How do you ensure that farmers don’t suffer from climate change and shocks? I mean, the problems are so inspiring. So I think it wouldn’t be fair to say that we don’t know what problems we are solving today, and I will absolutely stand by that. And I’ll go back to what Paula said: I’m not sure industry today is really putting the human at the center of the loop, but I think they need to. They absolutely need to. Because as we develop technology, it seems like the end goal of AI that all the big companies are talking about is AGI. Now, when you look at what AGI means, it’s about control.

Why do we want to build something to control everyone? Why don’t we build something that is going to augment lives? If we could change that narrative, then I would say yes, humans are at the center. Right now, I think we still have a lot of work to do to bring humans back to the center of the loop. And it’s something we have to realize, and industry has to realize: that that is the only way you can build sustainable businesses, and that’s how you build your staying power. So it’s going to be very important.

Maria Grazia

Absolutely. And it’s about having these different entities around the table, but also having different governments, in this multilateral setting, talk to each other about regulation, or more generally about policy, because at the end of the day we talk a lot about regulation, but regulations are only part of the policy framework one could put in place. So let’s go to Brando, because I could see he was kind of calling me with his eyes while we were talking, and I’m sure he wants to add on the multilateral setting. Please, over to you, Brando. Perhaps you were not calling me, but you’ve been called on nevertheless.

Brando Benifei

Well, I think it’s very important that we use occasions like this summit to advance a global cooperation framework. And for sure it’s also part of UNESCO’s mission to unite different cultures and approaches to what we are talking about; you explained earlier the longstanding work of the organization. But I think we need to face the reality that there are issues where global cooperation will be crucial and is still not sufficient. Think of the military use of AI, or the existential risks of losing control of very powerful AI models. This is part of a controversial debate, we would say, but I wouldn’t dismiss the renowned scientists who maintain that we are in a context where the lack of globally adopted rules is putting us in very significant danger.

And this is also part of the idea of balancing innovation and ethics, because for sure we need domestic rules to foster the best opportunities out of the various use cases of AI. In these days I met many companies working on very practical, extremely useful AI use cases to ameliorate our lives, to advance societal good. But this cannot be left in the hands of just the judgment of private-sector companies, which have a specific objective: profit for their owners or shareholders. It’s not societal good; they might want to add that on top, but that’s not their objective, and that’s natural. So we need to have frameworks in place for what our daily interactions with AI are, and we need to build common standards: the more broadly adopted standards we have globally, the better the results we will reach. But we also need a step further, which is global cooperation on those issues where we cannot do very much domestically; they are global issues. And I think that, with increased geopolitical tension, soon the use of AI for peace will be quite an important topic on which the international community has to find a way to take quick steps forward. I hope that our leaders will deal with that.

Maria Grazia

I couldn’t agree more with the need to coordinate and to have an approach that is global. And allow me the prerogative of the moderator to call on my ADG, Tawfik; I will take the consequences of that. What I would like to ask you is: what does it mean to have people at the center? And let’s remember that in your case, given the work you lead in the communication and information sector, what is the role of information? Virginia was hinting at that before in terms of awareness. Could you please share a bit of those insights?

Dr. Tawfik Jelassi

Thank you, Maria Grazia. Let me pick up where Brando left off; he said AI for peace. Maybe some in the room know why UNESCO was created back in 1945, 80 years ago almost to the day. The mission of UNESCO was, and has been, to build peace in the minds of men and women. How? Through education, culture, the sciences, communication and information. Everything happens in the mindsets of people. Today, of course, we want AI to be a force for good, but it could also be a force for hazards, for harm, for risk. I tend to say technology is neutral; it depends on what humans make of it. It could be a force for good, or a force for, as you mentioned, wars or unwanted things. So yes, humanity in the loop, that’s fundamental. I always ask myself, and I ask my team at UNESCO: if whatever we do in the field transforms lives, then we are spot on. If you can make the beneficiaries of our educational programs more successful through what you offer them, then that’s impact.

Where is the impact? AI can transform lives, yes, and you mentioned some examples: it can help cure cancer, as you said, provide food for the needy, and so on and so forth. We want that type of AI. And AI does not only stand for artificial intelligence; AI stands for all-inclusive. That’s AI as well. So if you take that perspective, if you really put humanity in the loop, at the center, not only in the loop but at the center… and allow me one minute to share something with you. I have been at UNESCO for five years. My most memorable day happened last week, in a tiny village in remote southern Africa, a village in which people had no access to radio, no TV, no mobile telephony, no internet, nothing.

They always felt they were second-class citizens in their country. Imagine that you have no access to information: you don’t know what’s happening around you, you cannot call your relatives living in other cities. This was the case for 15 small communities. What UNESCO did was first provide community radios, setting up a tower with transmission equipment, so that through the radio people had information and knew what was happening. And when we did that, telecom operators came in to plug in their equipment and provide mobile telephony, and then came internet connectivity, and then UNESCO put in place early warning systems, because these areas were very prone to flooding, and whenever that happened it wiped out the cattle, the livelihood of the people, etc. That’s transforming the lives of people. AI can contribute in a huge way to that, and I think if we put that at the center, then of course it has to be ethical, human-centered, accountable, transparent, all the principles that we talked about. And then comes the issue of…

advocacy and capacity development, because more informed policymakers will go this route. But if we don’t raise awareness, if we don’t do the advocacy, the capacity building and the training, then of course we will see some companies or some people going for the buck, for the profit out of this technology, not for the social benefit, not for transforming lives.

Maria Grazia

Thanks very much. Paula, over to you, because we end with the company perspective. How do you see this question of including the other stakeholders in what you do, and how can that transform things and help you deliver on AI from the company’s side?

Paula Goldman

Well, thank you for saying that, and I actually think it becomes more and more obvious that that’s actually the only way to scale the technology. And, you know, just think about it: if you’re developing a technology that is meant to serve many different markets and many different populations, you need to know, for example, like the voice capability we have in our AI agent, that that capability, even if we’re just talking about English, forget about other languages for a second, works across different vernaculars of English, different accents, etc. I work a lot on product accessibility, right?

It needs to understand a deaf accent, for example. And so the most inclusively designed technology is going to be the one that is most successful; it’s going to increase accuracy rates and so on. To that end, I also think it’s actually a very, very exciting time to be able to use AI for inclusion. I mentioned product accessibility; one of the things that is most hopeful and most exciting to me about this time is that we’re starting to see AI agents that correct, in real time, code that is not accessible; my team is working on this at Salesforce. Or a browser extension that corrects in real time, so that if you’re on your phone and something comes up, a common problem is you’re trying to zoom out or in and it breaks, it will correct it in real time. This kind of technology is the difference between someone being able to use some software to actually get their job done and someone being excluded from getting their job done. So I guess the point I’m trying to make is that the most inclusively designed technology is going to be the most commercially successful, and also that this is an incredibly exciting time to be doing this work.

Maria Grazia

I’m really happy to hear from the voice of industry that those who include are not doing a favor to those who get included; rather, the systems themselves become superior. And I think that counters another common legend out there that says, no, you know, it’s costly and perhaps the profit is not there. What we are hearing from the voices of the companies is really: no, because it’s a superior product, it’s a better product, it performs better. Last but not least, back to our Virginia. Here I would especially like to hear from you what you think is the role of a specific component of human capital, that is, skills.

And we have heard throughout this week the importance of upskilling, reskilling. And is that really the solution?

Virginia Dignam

Thank you very much. Firstly, going back: if I gave the impression that hammers are not useful, that’s not the case; there are many useful hammers. My point is more that we need a toolbox, we don’t need only hammers, and even outside of the Western world we are too much focused on hammers. The skills, yes, we really need to focus on skills. We need to focus on our own capabilities, on our lived experience, and so on. Someone talked about AGI, and indeed at this moment the AGI concept is about power; it is about giving power to those companies that claim they will build it. How are they building it? It is what I call the Play-Doh approach: they are putting together all the data of the world with all the capacities of the world, creating a huge ball of Play-Doh. If you have played with Play-Doh before, you know that after you play, there is no color, there is no shape, there is nothing anymore.

It’s just a thing. And then, of course, that thing might do something, but no one knows what’s inside, what came in, what came out, and so on. We need to go much broader in understanding what this AGI is. What does AGI fundamentally mean? A system that is more intelligent than us, that can solve problems that we cannot. We already have AGI. We always had AGI. It’s called collective intelligence. The moment that we work together, we can do more than each one of us alone. If we use the AI technology that we are developing to support this collaboration, to develop the different skills, to integrate all our different capabilities, our different experiences, our different abilities, the different tools that we have developed,

then we get a much broader bouquet: no longer a ball of Play-Doh without color, but a huge bouquet of flowers of all those colors and so on. So that is what AGI is about, and we cannot let the big companies run away with the concept of AGI with the idea that they are going to create a god which is going to solve our problems. AGI is about us. It’s about putting all of us together, because our collective intelligence is really what, at the end of the day, is going to solve, or to support us in solving, our problems. Just one more thing, and I think that’s also part of the skills. Technology, and there I disagree with you, is not neutral.

All technology embeds and encompasses our choices, our options, our data. All of that is part of it. We have to understand technology as a non-neutral artifact, take those capabilities, and also embrace the different perspectives and the different colors of this. But again, all together, that’s the only way forward: not giving up and hoping that AI is going to solve whatever complex problems we have, but really embracing and enforcing collective intelligence. That is AGI.

Maria Grazia

Excellent. Collective intelligence. Now we are going to have a collective set of questions, just a couple, because time doesn’t allow for more. So please, when you want to intervene, be absolutely short: say your name, say whom you want to ask the question to, and the question, without giving the whole history of humankind first. I spotted a hand first; there was a lady on this side. Now I think she got shy, and she just put the… So, let’s start with that gentleman. No, it’s the gentleman behind you, I’m sorry. I can do everything here, from moderating to passing you the microphone; we are proactive and problem-solving. Let’s go, your name is?

Audience

Hello everyone. Myself Rajan; I am from Business Club TV, and I am the CEO and founder of a startup. I have a very basic question for Professor Virginia Dignam. Professor, what is AI policy?

Virginia Dignam

Wow, okay, how many hours do we have? Okay, very shortly: AI policy is about the tools, the capabilities, the skills, the information, the knowledge and the understanding of how to address the impact of AI. Not the technology, not the designing of the technology, but really addressing the impact of this technology across the whole development loop: from the beginning, asking ourselves why we are using AI and whether this is the right problem to apply it to, to the way we are developing it, to the way that we are evaluating it. And addressing the impact of it.

Maria Grazia

No, I’m sorry, because we have to be inclusive and allow the others to speak as well. Please, that lady, yes, exactly, the one with the hand raised. Just down here, three rows ahead. I’m going to be gender equal, so one-on-one; I’m not going to have only the men speak, because typically you’re the fastest to raise your hands, while we women are sharper. Go ahead.

Rita Soni

I love that. Thank you for that. Hi, my name is Rita Soni. I don’t know who should answer this question, but at the beginning of this panel I heard someone say that those who are developing and designing AI have probably never experienced a power cut or potholes in the road. I thought that there would be more discussion about who is actually involved as the humans in the loop. Debjani, you know me. So I have to ask this question about the people that are actually developing it, and whether we’re thinking about responsibly employing them. Right now, we know that there are over half a million people in the world doing this work. And so, I’m going to ask you to think about those

that we consider impact workers. They’ve typically been excluded, but now they are part of it. So how do we support, as a movement, getting those that have experienced power cuts to help design and develop it? This is a development-related question.

Maria Grazia

Who wants to tackle it? Because we are over time. That’s the last question, and then we will have to say thank you and continue the conversation in parallel.

Debjani Ghosh

Yeah, fully. I mean, you know, if you’re talking about whether developers have suffered power cuts while developing the technology: anyone who’s working out of Bangalore or any Indian city, yes, they have. They’ve definitely suffered during development. Now, I think, Rita, the point you were making is how do we make it more inclusive? How do we bring people in? And that goes back to the perennial question: how do you ensure that you democratize not just access to technology, but also the design and creation of the technology, right? And it’s not just gender. It’s also how do you diffuse it down to smaller cities, to people who are actually facing the problems in smaller cities.

And I think at least in India we are doing that through our initiatives like Startup India, etc., which today are more focused on building capabilities in Tier 2 and Tier 3 cities, not just as users, not just for adoption, but actually for design and development. So there’s a lot of focus, and I’m sure there are founders here who have come from the smallest of cities in India. And the best part is that when we track the numbers, the growth of startups and founders is higher in the Tier 2, Tier 3 and Tier 4 cities than in Tier 1 cities. So that tells us we’re doing something right.

Maria Grazia

I hope you have enjoyed this at least half as much as I have enjoyed this panel. Please join me in thanking the panellists. And we’re going to do a group photo, so please stand up; we’re going to do a selfie with all of you. Those in the back, come here, stand like this, so we’re all together. This is our collective intelligence. Thank you very much.

Related Resources
Knowledge base sources related to the discussion topics (33)
Factual Notes
Claims verified against the Diplo knowledge base (4)
Confirmed (high confidence)

“Tim Curtis welcomed participants and thanked the Government of India for its collaboration”

The knowledge base records Tim Curtis, UNESCO Regional Director for South Asia, expressing gratitude to the Government of India during the event [S19].

Confirmed (high confidence)

“Tim Curtis introduced Dr Tawfik Jelassi as Assistant Director‑General for Communication and Information”

Tawfik Jelassi is listed as Assistant Director General for Communication and Information at UNESCO [S107].

Confirmed (high confidence)

“UNESCO’s 2021 Recommendation on the Ethics of Artificial Intelligence was adopted by 193 member states and rests on three non‑negotiable pillars – human rights, human dignity and fundamental freedoms”

The recommendation was indeed adopted by 193 member states and is grounded in principles of human rights, dignity and fundamental freedoms [S29].

Confirmed (medium confidence)

“UNESCO’s position is that regulatory frameworks do not hinder innovation”

UNESCO has explicitly stated that frameworks and constraints do not impede innovation dynamics [S1].

External Sources (120)
S1
Ethical AI_ Keeping Humanity in the Loop While Innovating — -Paula Goldman- Chief Ethical and Humane Use Officer at Salesforce
S2
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — -Paula Goldman: Area of expertise, role, and title not mentioned in the transcript
S3
Welfare for All Ensuring Equitable AI in the Worlds Democracies — Thank you so much. My name is Rita Soni. I work with a company that’s operating in small-town India, delivering all the…
S4
https://app.faicon.ai/ai-impact-summit-2026/welfare-for-all-ensuring-equitable-ai-in-the-worlds-democracies — Thank you so much. My name is Rita Soni. I work with a company that’s operating in small-town India, delivering all the…
S6
https://dig.watch/event/india-ai-impact-summit-2026/from-technical-safety-to-societal-impact-rethinking-ai-governanc — I think we can just continue the discussion and I hope we’ll do. This is today just a start. I also hope that we will be…
S7
From Technical Safety to Societal Impact Rethinking AI Governanc — Thank you, Neha. Thank you, Eugena. Merve Hickok. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Tha…
S8
Ethical AI_ Keeping Humanity in the Loop While Innovating — -Professor Virginia Dignam- (Same as Virginia Dignam, referenced with title)
S10
Ethical AI_ Keeping Humanity in the Loop While Innovating — -Brado Benefai- (Appears to be the same person as Brando Benifei, mentioned in introduction) -Brando Benifei- Member of…
S11
Open Forum #72 European Parliament Delegation to the IGF & the Youth IGF — – Brando Benifei: Member of European Parliament (mentioned but not in speakers list)
S12
Ethical AI_ Keeping Humanity in the Loop While Innovating — -Maria Grazia- Chief of the Executive Office of UNESCO’s Social and Human Sciences sector, moderator, microeconomist spe…
S13
https://dig.watch/event/india-ai-impact-summit-2026/ethical-ai_-keeping-humanity-in-the-loop-while-innovating — Thank you for coming. And finally, a great pleasure to welcome Brado Benefai, a member of the European Parliament who wi…
S14
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — – **Dr. Maria Grazia Grani** – CEO from the Social and Human Sciences Sector UNESCO (mentioned in introduction but appea…
S15
WSIS+20 High-Level Event 2025 Inaugural Session: Celebrating Two Decades and Achieving Future Milestones Together — ### UNESCO Assistant Director-General Tawfik Jelassi – **Tawfik Jelassi** – Role/Title: Assistant Director General for …
S16
Day 0 Event #119 Roam X Driving WSIS Implementation and Digital Cooperation — – **Tawfik Jelassi** – Assistant Director General of UNESCO for Communication and Information, delivered keynote remarks…
S17
DC-OER The Transformative Role of OER in Digital Inclusion | IGF 2023 — Dr. Tawfik Jelassi, Assistant Director-General for Communication and Information Sector, UNESCO
S18
https://dig.watch/event/india-ai-impact-summit-2026/ethical-ai_-keeping-humanity-in-the-loop-while-innovating — So it gives me great pleasure to just present today’s panellists and moderator. We’ve had Dr. Tawfiq Jilasi, who’s Assis…
S19
Ethical AI_ Keeping Humanity in the Loop While Innovating — 339 words | 73 words per minute | Duration: 276 seconds | This afternoon to this UNESCO sponsored event, my name is Tim …
S20
Ethical AI_ Keeping Humanity in the Loop While Innovating — -Dr. Tawfiq Jilasi- Assistant Director General for Communication and Information (mentioned by Tim Curtis in introductio…
S21
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S22
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S23
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S24
Ethical AI_ Keeping Humanity in the Loop While Innovating — -Debjani Ghosh- Distinguished Fellow at NITI Aayog, former role with NASCOM
S25
Panel Discussion: 01 — -Debjani Ghosh- Distinguished Fellow, Niti Aayog (role: moderating the ministerial conversation)
S26
Séance d’ouverture : « La gouvernance internationale du numérique et de l’IA : à la croisée des chemins ? » — Tawfik Jelassi Merci, madame. Je crois que presque tout a déjà été dit et je suis d’accord avec mon ami et collègue Thom…
S27
9821st meeting — For Mozambique, it is essential that the international community establishes norms and standards that promote trust and …
S28
Scaling Innovation Building a Robust AI Startup Ecosystem — This comment is insightful because it explicitly addresses the geographic democratization of innovation in India, acknow…
S29
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — The UNESCO recommendation on AI ethics has become a critical guide for global AI governance. It was adopted two years ag…
S30
WSIS Action Line C10: The Future of the Ethical Dimensions of the Information Society — Dr. Mariagrazia Squicciarini:Good morning, everybody. Welcome to this session that actually tries to discuss the issue t…
S31
Ethics and AI | Part 6 — A significant focus of the Act is placed on transparency. It mandates that users be informed when they are interacting w…
S32
Indias AI Leap Policy to Practice with AIP2 — Brando explains that the EU AI Act takes a targeted approach by specifically regulating high-risk AI applications while …
S33
AI diplomacy — We are, in essence, searching for a common language to discuss AI ethics, safety, and security. We can see the early res…
S34
DC-DNSI: Beyond Borders – NIS2’s Impact on Global South — Elise Racine: I’m Elise Racine. I’m a doctoral candidate at the University of Oxford. I study artificial intelligence, i…
S35
1 Introduction — At present, the need to respond to complex social, environmental and economic challenges keeps increasing. Profound geop…
S36
Ethics and AI | Part 1 — The logic of innovation is not substantially altered by ethics, morals, equity, and other societal considerations. Altho…
S38
WS #187 Bridging Internet AI Governance From Theory to Practice — Governance Implementation Challenges Legal and regulatory | Infrastructure Renata emphasizes the practical challenge o…
S39
WS #98 Towards a global, risk-adaptive AI governance framework — Speakers highlighted ongoing efforts to create risk-based governance frameworks, such as the OECD’s AI classification sy…
S40
Open Forum #30 High Level Review of AI Governance Including the Discussion — – Lucia Russo- Melinda Claybaugh- Ansgar Koene Legal and regulatory | Development Moving from Principles to Practice …
S41
Open Forum #15 Building Bridges for WSIS Plus a Multistakeholder Dialogue — Despite consensus on the importance of Global South participation, concrete mechanisms for addressing participation gaps…
S42
Closing Ceremony — Development | Human rights This argument emphasizes the importance of diverse representation in digital governance disc…
S43
The Challenges of Data Governance in a Multilateral World — Collaboration and participation from the Global South were emphasised as crucial for developing a collaborative approach…
S44
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The roadmap is built upon core principles including “human and planetary welfare, accountability and transparency, inclu…
S45
Industry leaders partner to promote responsible AI development — Anthropic, Google, Microsoft, and OpenAI, four of the most influential AI companies,have joined to establishtheFrontier …
S46
WSIS Plus 20 Review: UN General Assembly High-Level Meeting – Comprehensive Summary — Cybercrime remains a serious concern that requires concerted collective solutions. Furthermore, there is a need to estab…
S47
UNSC meeting: Artificial intelligence, peace and security — Malta:Thank you, President. And I thank the UK Presidency for holding today’s briefing on this highly topical issue. I a…
S48
360° on AI Regulations — Balancing national security interests with maintaining trading partnerships is a crucial aspect of AI regulation. The po…
S49
Ethics and AI | Part 5 — Recognizing that activities within the lifecycle of artificial intelligence systems may offer unprecedented opportunitie…
S50
WSIS Action Line C10: Ethics in AI: Shaping a Human-Centred Future in the Digital Age — Both speakers strongly advocate that ethics must be integrated from the very start of technology development rather than…
S51
WSIS Action Line Facilitators Meeting: 20-Year Progress Report — Ethics must be foundational and cross-cutting pillar, embedded across entire technology lifecycle including when moving …
S52
Secure Finance Risk-Based AI Policy for the Banking Sector — Embedded governance means integrating accountability, transparency, and risk management into every stage of the AI life …
S53
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — AI is a general-purpose technology that holds the potential to increase productivity and build impactful solutions acros…
S54
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — In conclusion, the UNESCO recommendation on AI ethics provides crucial guidance for global AI governance. By grounding A…
S55
How to make AI governance fit for purpose? — – Jennifer Bachus- Anne Bouverot- Shan Zhongde- Chuen Hong Lew Given that AI technologies are inherently global, effect…
S56
Judiciary engagement — Legal and Regulatory Framework Needs Legal and regulatory | Development Need for global cooperation but sovereignty st…
S57
WS #270 Understanding digital exclusion in AI era — The discussion underscored the urgency of taking action to prevent further widening of the digital divide as AI technolo…
S58
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — AI policies in Africa should ideally espouse a context-specific and culturally sensitive orientation. The prevailing ten…
S59
Ministerial Roundtable — The discussion highlighted the importance of carefully understanding the opportunities presented by emerging technologie…
S60
Artificial Intelligence & Emerging Tech — In conclusion, the meeting underscored the importance of AI in societal development and how it can address various chall…
S61
WEF Business Engagement Session: Safety in Innovation – Building Digital Trust and Resilience — Legal and regulatory | Development | Human rights Rather than viewing regulation as hampering innovation, new regulator…
S62
Youth-Driven Tech: Empowering Next-Gen Innovators | IGF 2023 WS #417 — Celestine Alves:Yes. Thank you, Denise. Well, I will speak from perspective from a lawyer, of course, and from a Brazili…
S63
DC-Sustainability Data, Access & Transparency: A Trifecta for Sustainable News | IGF 2023 — In conclusion, the analysis underscores the need to consider ethics, data protection, diversity, and effective regulatio…
S64
Capacity Building in Digital Health — Explanation:It’s unexpected to see regulatory leaders advocating for flexibility and innovation rather than rigid compli…
S65
WS #362 Incorporating Human Rights in AI Risk Management — – Operational frameworks and processes are necessary to translate principles into practice
S66
AI Governance Dialogue: Steering the future of AI — Martin used a maritime metaphor to explain current governance limitations, stating that while frameworks like the UN’s P…
S67
Pre 9: Discussion on the outcomes of the Global Multistakeholder High Level Conference on Governance of Web 4.0 and Virtual Worlds — However, the complexity of implementation challenges, from governance sandbox operation to meaningful stakeholder partic…
S68
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — Advocates for a harmonised approach to regulation and policy-making believe that this method can yield positive outcomes…
S69
WS #110 AI Innovation Responsible Development Ethical Imperatives — Innovation must be guided by law, ethics, and critical reflection rather than being an end in itself Unexpectedly, spea…
S70
WS #438 Digital Dilemmaai Ethical Foresight Vs Regulatory Roulette — This comment dismantles one of the most persistent false narratives in tech policy – that regulation inherently stifles …
S71
Ethical AI_ Keeping Humanity in the Loop While Innovating — Agreed with:Dr. Tawfik Jelassi, Debjani Ghosh — Innovation and ethics are not contradictory but mutually reinforcing Ag…
S72
Main Topic 3 – Innovation and ethical implication  — She advocates for these initiatives to bolster and reference each other rather than consolidate, enabling stakeholders a…
S73
WSIS Action Line C10: Ethics in AI: Shaping a Human-Centred Future in the Digital Age — Ethics must be foundational and embedded from the beginning of technology development, not treated as an afterthought
S74
Ethical AI_ Keeping Humanity in the Loop While Innovating — Absolutely. And it’s about having these different entities around the table, but also having different governments and h…
S75
Ethical AI_ Keeping Humanity in the Loop While Innovating — The discussion revealed remarkable consensus across stakeholder groups on key principles: innovation and ethics are comp…
S76
WS #187 Bridging Internet AI Governance From Theory to Practice — Governance Implementation Challenges Legal and regulatory | Infrastructure Renata emphasizes the practical challenge o…
S77
Open Forum #30 High Level Review of AI Governance Including the Discussion — – Lucia Russo- Melinda Claybaugh- Ansgar Koene Legal and regulatory | Development Moving from Principles to Practice …
S78
WS #98 Towards a global, risk-adaptive AI governance framework — Speakers highlighted ongoing efforts to create risk-based governance frameworks, such as the OECD’s AI classification sy…
S79
High-level AI Standards panel — 1. **Translate**: Converting principles into practical implementation for real-world governance
S80
Closing plenary: multistakeholderism for the governance of the digital world — At the heart of this effort lies the principle of empowering stakeholders, who should have a fair chance to contribute, …
S81
Open Forum #15 Building Bridges for WSIS Plus a Multistakeholder Dialogue — Despite consensus on the importance of Global South participation, concrete mechanisms for addressing participation gaps…
S82
(Interactive Dialogue 4) Summit of the Future – General Assembly, 79th session — Indonesia: Excellencies, Distinguished Delegates. We are meeting at a pivotal crossroad, a watershed moment, and a def…
S83
Webinar session — Three speakers emphasized the importance of inclusive participation, particularly highlighting the meaningful involvemen…
S84
Closing Ceremony — Barriers to Participation and Representation Development | Human rights This argument emphasizes the importance of div…
S85
Industry leaders partner to promote responsible AI development — Anthropic, Google, Microsoft, and OpenAI, four of the most influential AI companies,have joined to establishtheFrontier …
S86
Artificial intelligence — One way of addressing concerns over AI ethics could be to combine ethical training for technologists (encouraging them t…
S87
WSIS Plus 20 Review: UN General Assembly High-Level Meeting – Comprehensive Summary — Cybercrime remains a serious concern that requires concerted collective solutions. Furthermore, there is a need to estab…
S88
WS #172 Regulating AI and Emerging Risks for Children’s Rights — Global cooperation and dialogue is needed to build common frameworks
S89
UNSC meeting: Artificial intelligence, peace and security — China:Thank you, Mr. President. Mr. President, China welcomes you to preside over today’s Security Council meeting and t…
S90
Policymaker’s Guide to International AI Safety Coordination — Russell argues that global coordination on AI safety is essential because the potential harms, whether psychological dam…
S91
Pioneering Responsible Global Governance for Quantum Technologies — Shamira Ahmed:Thanks for joining us. So I’ll get right into the discussion. Xianghong, you have been working at UNESCO. …
S92
WS #45 Fostering EthicsByDesign w DataGovernance & Multistakeholder — Rosanna Fanni: Thank you. Thank you very much. and also thanks for all my fellow panelists. I think a lot of things …
S93
DC-Sustainability Data, Access & Transparency: A Trifecta for Sustainable News | IGF 2023 — Audience:OK, that’s a nice clarification. Hello, everyone. I’m Alice Lenna from Brazil. I’m also a consultant for GRI, t…
S94
IGF 2023 WS #313 Generative AI systems facing UNESCO AI Ethics Recommendation — Dawit Bekele:Thank you. I agree with most of the things that have been said and in particular on the importance of havin…
S95
Opening Ceremony — ## Opening Context and UN Leadership – **Tawfik Jelassi** – Assistant Director General for the United Nations Education…
S96
Charting the Course: Discussing the Impact and Future of the Internet Governance Forum — Hence, there’s an argument for refining these entities to ensure sustained success, with policy networks potentially gai…
S97
Challenges and Opportunities: Emerging Technologies and Sustainability Impacts  — Martina Barbero:Thank you very much for the question. It is truly a very good question. I think we have seen from the pr…
S98
Digital Cooperation and Empowerment: Insights and Best Practices for Strengthening Multistakeholder and Inclusive Participation — These key comments fundamentally transformed the discussion from a surface-level celebration of multi-stakeholder proces…
S99
OPENING STATEMENTS FROM STAKEHOLDERS — Regarding regulation, there is a call for innovative regulation that supports new technologies and practices while ensur…
S100
DigiSov: Regulation, Protectionism, and Fragmentation | IGF 2023 WS #345 — Bruna Santos:Yes, thanks a lot. Just about the priorities, I don’t know if it’s possible to say whether there’s a differ…
S101
AI in Mobility_ Accelerating the Next Era of Intelligent Transport — The discussion maintained a serious, urgent tone throughout, driven by the gravity of India’s road safety crisis. While …
S102
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — The tone was consistently optimistic yet pragmatic throughout the conversation. Speakers maintained an encouraging outlo…
S103
WS #236 Ensuring Human Rights and Inclusion: An Algorithmic Strategy — The tone of the discussion was largely serious and concerned, given the gravity of the issues being discussed. However, …
S104
AI and Data Driving India’s Energy Transformation for Climate Solutions — The tone was collaborative and solution-oriented throughout, with speakers building on each other’s insights rather than…
S105
Inclusive AI Starts with People Not Just Algorithms — The tone was consistently optimistic and empowering throughout the discussion. Speakers maintained an enthusiastic, forw…
S106
Opening remarks — Applied in 40 countries for national digital assessments, these indicators are under revision in a collaborative effort….
S107
WSIS Action Line C7 E-learning — – **Tawfik Jelassi** – Assistant Director General for Communication and Information at UNESCO Tawfik Jelassi, UNESCO’s …
S108
DC-Inclusion & DC-PAL: Transformative digital inclusion: Building a gender-responsive and inclusive framework for the underserved — – Tawfik Jelassi: Assistant Director General of Communication and Information Sector of UNESCO Tawfik Jelassi: Good mo…
S109
WS #162 Overregulation: Balance Policy and Innovation in Technology — James Nathan Adjartey Amattey, from the private sector in Africa, pointed out that the COVID-19 pandemic demonstrated th…
S110
Empowering Women Entrepreneurs through Digital Trade and Training ( Global Innovation Forum) — Dami, the founder of Shutlers, emphasises the daily challenges faced as an entrepreneur, highlighting the necessity to f…
S111
Trade regulations in the digital environment: Is there a gender component? (UNCTAD) — The analysis of the provided statements highlights several key points from all speakers. One main argument is that digit…
S112
Keynote-Julie Sweet — Thank you, Prime Minister Modi, Minister Vaishnav, and your outstanding teams for convening us for this critical summit …
S113
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — Kapoor provided the most comprehensive vision: an inclusive AI transition featuring better, more productive jobs where a…
S114
The digital economy in the age of AI: Implications for developing countries (UNCTAD) — Gabriela Ramos, Assistant Director General for Social and Human Sciences at UNESCO, has highlighted the unique mandate o…
S115
Future-proofing global tech governance: a bottom-up approach | IGF 2023 Open Forum #44 — It is essential to factor in potential risks to human rights and ensure accountability in the development processes of t…
S116
FOREWORD — Just as large, however, is the gap that persists in many cases between theory and practice. Despite the existence of man…
S117
WS #31 Cybersecurity in AI: balancing innovation and risks — Gladys Yiadom: have as recently witnessed the emergence of AI-enabled system at an incredible scale. Despite various …
S118
La découvrabilité des contenus numérique: un facteur de diversité culturelle et de développement (Délégation Wallonie-Bruxelles, Belgian Mission to the UN in Geneva) — The analysis also highlighted the importance of implementing ethical principles and existing consensuses on a global sca…
S119
Open Forum #33 Building an International AI Cooperation Ecosystem — Ethical Considerations and Inclusivity Pelayo argues that while AI progress is important, it must not compromise fundam…
S120
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Thanks, Ahmed. Well, everyone, I’ll walk through I think a little bit of this presentation here on what…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Dr. Tawfik Jelassi
2 arguments | 156 words per minute | 961 words | 369 seconds
Argument 1
Innovation and ethics reinforce each other, not conflict
EXPLANATION
Dr. Jelassi argues that innovation and ethics are not opposed; instead, ethical reflection enhances trust and adoption of AI, thereby supporting further innovation. He stresses that regulation should enable rather than hinder creative development.
EVIDENCE
He states that he sees no contradiction between innovation and ethics, noting that regulation should not constrain creativity and that ethical design makes AI systems more trustworthy and widely deployed. He draws on his experience as former chair of a telecom operator board and references UNESCO’s 2021 recommendation as the only global ethical AI framework, which calls for human oversight, non-discrimination, cultural diversity and environmental sustainability [38-40].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S19 and S1 explicitly state that innovation and ethics should not be seen as trade‑offs and can be pursued together, corroborating the claim that they reinforce each other.
MAJOR DISCUSSION POINT
Innovation vs. Ethics
AGREED WITH
Tim Curtis, Brando Benifei
Argument 2
AI must be ethical by design, with human oversight and contextualisation
EXPLANATION
He emphasizes that AI systems should be built with ethical considerations from the outset, incorporating human oversight and adapting to specific contexts. This approach avoids post‑hoc fixes and aligns with UNESCO’s ethical principles.
EVIDENCE
Jelassi explains that AI must be ethical by design, with human oversight, non-discrimination, respect for cultural diversity and environmental sustainability, as outlined in the UNESCO recommendation adopted in 2021 [39-41].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S19 reports that Dr. Jelassi advocates for integrating ethical considerations from the very beginning of AI development, and S1 highlights the need for ethics‑by‑design with human oversight.
MAJOR DISCUSSION POINT
Ethics‑by‑Design
AGREED WITH
Debjani Ghosh, Paula Goldman, Virginia Dignam
Tim Curtis
1 argument · 73 words per minute · 339 words · 276 seconds
Argument 1
Balancing innovation with ethics is essential for human‑centred AI
EXPLANATION
Tim Curtis states that UNESCO’s mission is to promote AI development while ensuring it remains ethical and centred on human rights, especially for the Global South. He frames this balance as core to UNESCO’s engagement with AI ethics.
EVIDENCE
In his opening remarks he says UNESCO believes in “ensuring an ethical and human AI centred deployment whilst also encouraging the development of artificial intelligence… especially to the global south” [2].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S19 includes Tim Curtis’s opening remarks that UNESCO’s mission is to promote AI development while ensuring it remains ethical and human‑centred, and S1 repeats this framing.
MAJOR DISCUSSION POINT
Human‑Centred AI
AGREED WITH
Dr. Tawfik Jelassi, Brando Benifei
Debjani Ghosh
2 arguments · 164 words per minute · 1281 words · 466 seconds
Argument 1
Need concrete mechanisms to embed ethical oversight throughout the AI lifecycle
EXPLANATION
Debjani argues that ethical oversight must be integrated at every stage of AI development, from design through commercialization, using flag‑offs and sandbox testing to make ethics a by‑design feature rather than an afterthought.
EVIDENCE
She outlines that oversight should be built into the whole development process with “the right flag-offs at every part of the design and development process,” followed by sandbox testing, turning ethics into a design principle rather than an afterthought [61-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Both S19 and S1 emphasize that oversight must be built into the entire development process—from design to commercialization—with flag‑offs and sandbox testing.
MAJOR DISCUSSION POINT
Operationalising Ethics
AGREED WITH
Maria Grazia, Brando Benifei, Paula Goldman
Argument 2
Democratise AI design by building capabilities in Tier‑2/3 cities and supporting local startups
EXPLANATION
Debjani highlights initiatives that empower developers outside major hubs, noting that many startups in Tier‑2/3 Indian cities are growing faster than those in Tier‑1, and that programs like Startup India aim to build design and development capacity in these regions.
EVIDENCE
She describes chairing a working group that produced the AI Impact Commons with stories from over 30 countries solving problems such as malnutrition and climate resilience, and notes that India’s Startup India programme focuses on building capabilities in Tier-2/3 cities, leading to higher startup growth in those areas [166-174] and [291-298].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S28 describes how innovation is emerging from Tier‑2 and Tier‑3 Indian cities and cites Startup India’s role, while S19 references the same initiative as evidence of democratising AI design.
MAJOR DISCUSSION POINT
Inclusive Capacity Building
AGREED WITH
Virginia Dignam, Paula Goldman, Rita Soni
Maria Grazia
1 argument · 164 words per minute · 1794 words · 655 seconds
Argument 1
Gap exists between UNESCO’s ethical principles and on‑ground implementation; practical tools are required
EXPLANATION
Maria points out that translating UNESCO’s high‑level ethical principles into concrete actions remains a major challenge, and that stakeholders need practical mechanisms to bridge this gap.
EVIDENCE
She remarks that “very often, what is very challenging is to go from principles to practice” and asks the ADG about the biggest gaps between UNESCO’s recommendation and what is happening on the ground [35-37].
MAJOR DISCUSSION POINT
Principles‑to‑Practice Gap
AGREED WITH
Debjani Ghosh, Brando Benifei, Paula Goldman
Brando Benifei
2 arguments · 119 words per minute · 947 words · 476 seconds
Argument 1
EU AI Act adopts a risk‑based approach, prohibiting harmful uses such as predictive policing
EXPLANATION
Brando explains that the EU AI Act classifies AI applications by risk level, imposing strict controls and outright bans on high‑risk uses like predictive policing and emotion‑recognition in workplaces, to safeguard human rights.
EVIDENCE
He lists the risk-based approach of the AI Act and cites specific prohibited use cases: predictive policing, emotional recognition in workplaces, and manipulative subliminal techniques [80-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S32 outlines the EU AI Act’s risk‑based classification and bans on high‑risk uses like predictive policing; S31 adds that the Act mandates transparency for high‑risk systems.
MAJOR DISCUSSION POINT
Risk‑Based Regulation
AGREED WITH
Maria Grazia, Debjani Ghosh, Paula Goldman
Argument 2
UNESCO’s global recommendation provides universal principles; multilateral cooperation is needed for issues like military AI
EXPLANATION
He argues that while national rules are essential, global challenges such as military AI and existential risks require coordinated multilateral frameworks, building on UNESCO’s universal recommendation.
EVIDENCE
He calls for global cooperation on issues like military AI, cites the need for common standards and notes UNESCO’s mission to build peace through education, culture and information, emphasizing that some AI risks cannot be addressed solely at the domestic level [191-199].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S33 notes that UNESCO’s recommendation serves as a global normative foundation, and S19 calls for multilateral cooperation on challenges such as military AI.
MAJOR DISCUSSION POINT
Global Cooperation
AGREED WITH
Tim Curtis, Dr. Tawfik Jelassi
Virginia Dignam
3 arguments · 146 words per minute · 1372 words · 562 seconds
Argument 1
Innovation should move beyond the Western “hammer” mindset and incorporate diverse cultural traditions
EXPLANATION
Virginia critiques the narrow view of innovation that treats AI as a single tool (“hammer”) and calls for incorporating non‑Western epistemologies, such as the African Ubuntu tradition, to create different kinds of AI.
EVIDENCE
She contrasts the Western Cartesian, individualistic tradition (“I think, therefore I am”) with the African Ubuntu perspective (“we are, therefore I am”), arguing that this shift would produce fundamentally different AI systems [100-113].
MAJOR DISCUSSION POINT
Cultural Diversity in Innovation
AGREED WITH
Paula Goldman, Debjani Ghosh, Rita Soni
Argument 2
Engineers need humanities and social‑science training to ask why problems matter and who benefits
EXPLANATION
She stresses that technical education must be complemented by humanities and social‑science perspectives so engineers consider the purpose, beneficiaries and societal impact of AI solutions.
EVIDENCE
Virginia calls for engineers to be taught to ask “why is this a problem, who benefits, who loses,” and to integrate humanity into engineering practice, emphasizing the need for precise, socially aware design [123-128].
MAJOR DISCUSSION POINT
Humanities in Engineering
AGREED WITH
Dr. Tawfik Jelassi, Debjani Ghosh, Paula Goldman
Argument 3
AI policy concerns tools, capabilities, skills and impact management, not just technology design
EXPLANATION
She defines AI policy as the set of tools, capabilities, skills and impact‑management mechanisms needed to address AI’s societal effects, rather than focusing solely on technical design.
EVIDENCE
Virginia explains that AI policy is about “the tools, the capabilities, the skills, the information, the knowledge on the understanding how to address the impact of AI,” and stresses asking why AI is being used and how its impact is evaluated [265-269].
MAJOR DISCUSSION POINT
Scope of AI Policy
Rita Soni
2 arguments · 161 words per minute · 167 words · 62 seconds
Argument 1
Inclusion of developers from low‑resource contexts is crucial to ensure relevance and fairness
EXPLANATION
Rita points out that many AI creators have never experienced infrastructure challenges common in low‑resource settings, and argues that involving such developers is essential for fair and relevant AI systems.
EVIDENCE
She observes that developers often “have never experienced a power cut or potholes,” and asks how to bring those who have faced such conditions into the design process to ensure inclusive AI [279-285].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S34 discusses the importance of equitable access and reparative practices in AI, supporting the need to involve developers from low‑resource settings.
MAJOR DISCUSSION POINT
Inclusive Development
AGREED WITH
Virginia Dignam, Paula Goldman, Debjani Ghosh
Argument 2
Provide opportunities for developers who face power cuts and infrastructure challenges to participate in AI creation
EXPLANATION
Rita reiterates the need to democratise AI design by giving developers in areas with unreliable electricity and connectivity the chance to contribute, thereby reducing bias and increasing relevance.
EVIDENCE
She repeats her earlier concern about power-cut-affected developers and asks how to support them as a movement to involve those who have experienced such constraints in AI development [279-285].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S34 again highlights the necessity of creating inclusive pathways for under‑represented creators facing infrastructural constraints.
MAJOR DISCUSSION POINT
Access for Under‑Represented Creators
Paula Goldman
2 arguments · 159 words per minute · 846 words · 318 seconds
Argument 1
Ethical AI must be built into products; trust and human‑centred transformation drive adoption
EXPLANATION
Paula argues that companies need to embed ethical safeguards into AI products to build trust, and that placing people at the centre of AI transformation is key to scaling and successful deployment.
EVIDENCE
She notes that organizations must be able to trust AI to work, that scaling requires putting people at the centre of transformation, and that ethical AI built into products drives adoption and avoids embarrassment [151-159].
MAJOR DISCUSSION POINT
Trust‑Driven Adoption
AGREED WITH
Maria Grazia, Debjani Ghosh, Brando Benifei
Argument 2
Inclusive design (e.g., accessibility, language variants) yields superior, commercially successful AI systems
EXPLANATION
She provides concrete examples showing that designing AI to handle diverse accents, vernaculars and accessibility needs not only improves inclusion but also leads to better performance and commercial success.
EVIDENCE
Paula cites Salesforce’s work on voice capabilities that must work across English accents, understand deaf accents, and overall product accessibility, arguing that inclusive design results in superior, more successful products [221-227].
MAJOR DISCUSSION POINT
Inclusive Design Benefits
AGREED WITH
Virginia Dignam, Debjani Ghosh, Rita Soni
Audience
1 argument · 101 words per minute · 45 words · 26 seconds
Argument 1
Clear definition of AI policy is needed to address AI’s societal impact
EXPLANATION
The audience member asks for a definition of AI policy, highlighting the need for a clear understanding of its scope to effectively manage AI’s societal consequences.
EVIDENCE
Rajan from the audience asks, “what is AI policy?”, indicating a demand for a concise definition of the term [263].
MAJOR DISCUSSION POINT
Defining AI Policy
Agreements
Agreement Points
Innovation and ethics reinforce each other rather than being in conflict
Speakers: Tim Curtis, Dr. Tawfik Jelassi, Brando Benifei
Balancing innovation with ethics is essential for human‑centred AI
Innovation and ethics reinforce each other, not conflict
EU AI Act adopts a risk‑based approach, prohibiting harmful uses such as predictive policing
Tim Curtis states UNESCO’s mission is to promote AI development while ensuring it remains ethical and human-centred [2]. Dr. Jelassi explicitly says he sees no contradiction between innovation and ethics and argues they reinforce each other, noting that ethical design increases trust and adoption [38-40]. Brando also argues that innovation and ethics are not opposed and that regulation can balance both [75-78].
POLICY CONTEXT (KNOWLEDGE BASE)
This view echoes the consensus reached at IGF 2023 and UNESCO’s AI ethics recommendation, which explicitly reject the narrative that ethics hinders innovation and instead argue that they are mutually reinforcing [S69][S70][S71].
Ethics must be embedded by design and throughout the AI lifecycle
Speakers: Dr. Tawfik Jelassi, Debjani Ghosh, Paula Goldman, Virginia Dignam
AI must be ethical by design, with human oversight and contextualisation
Need concrete mechanisms to embed ethical oversight throughout the AI lifecycle
Ethical AI must be built into products; trust and human‑centred transformation drive adoption
Engineers need humanities and social‑science training to ask why problems matter and who benefits
Jelassi stresses AI should be ethical by design with human oversight and contextualisation [39-41]. Debjani outlines concrete mechanisms such as flag-offs and sandbox testing to make ethics a by-design feature [61-68]. Paula describes building trust by embedding ethical safeguards into products and putting people at the centre of transformation [151-159]. Virginia calls for engineers to be trained in humanities to consider purpose and impact, linking ethics to design [123-128].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple WSIS and sector-specific reports stress that ethics should be a foundational, cross-cutting pillar from conception through decommissioning, and that governance must be embedded at every stage of the AI lifecycle [S50][S51][S52][S65].
Inclusive design and cultural diversity are essential for effective AI
Speakers: Virginia Dignam, Paula Goldman, Debjani Ghosh, Rita Soni
Innovation should move beyond the Western “hammer” mindset and incorporate diverse cultural traditions
Inclusive design (e.g., accessibility, language variants) yields superior, commercially successful AI systems
Democratise AI design by building capabilities in Tier‑2/3 cities and supporting local startups
Inclusion of developers from low‑resource contexts is crucial to ensure relevance and fairness
Virginia critiques the narrow “hammer” view of innovation and urges inclusion of non-Western epistemologies such as Ubuntu [100-113]. Paula highlights the commercial and performance benefits of designing AI for diverse accents and accessibility needs [221-227]. Debjani points to initiatives that empower developers in Tier-2/3 Indian cities and the AI Impact Commons showcasing inclusive solutions [166-174][291-298]. Rita stresses the need to involve developers who have experienced power cuts and infrastructure challenges to make AI fairer [279-285].
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions on digital exclusion and culturally-sensitive AI policies highlight the need for inclusive design, multilingual support, and cross-cultural dialogue to ensure AI benefits diverse populations [S57][S58][S59].
Need for practical, operational mechanisms (regulation, risk‑based approaches, sandboxing) to translate principles into practice
Speakers: Maria Grazia, Debjani Ghosh, Brando Benifei, Paula Goldman
Gap exists between UNESCO’s ethical principles and on‑ground implementation; practical tools are required
Need concrete mechanisms to embed ethical oversight throughout the AI lifecycle
EU AI Act adopts a risk‑based approach, prohibiting harmful uses such as predictive policing
Ethical AI must be built into products; trust and human‑centred transformation drive adoption
Maria notes the challenge of moving from principles to practice and asks about gaps [35-37]. Debjani proposes flag-offs and sandbox testing as concrete mechanisms [61-68]. Brando describes the EU AI Act’s risk-based classification and bans on high-risk uses as a regulatory tool [80-84]. Paula emphasizes practical steps companies take to embed ethics and build trust, enabling scalable adoption [151-159].
POLICY CONTEXT (KNOWLEDGE BASE)
Guidance from finance and governance bodies calls for risk-based AI policies, operational frameworks, and sandbox environments to move from high-level principles to actionable measures, with regulation positioned as an enabler rather than a barrier [S52][S65][S68][S67][S61].
Global cooperation and multilateral frameworks are essential for AI governance
Speakers: Tim Curtis, Dr. Tawfik Jelassi, Brando Benifei
Balancing innovation with ethics is essential for human‑centred AI
Innovation and ethics reinforce each other, not conflict
UNESCO’s global recommendation provides universal principles; multilateral cooperation is needed for issues like military AI
Tim frames UNESCO’s global role in balancing innovation and ethics for the Global South [2]. Jelassi references the UNESCO 2021 recommendation adopted by 193 member states as the only global ethical AI framework [39-41]. Brando calls for global cooperation on challenges such as military AI and stresses the need for common standards beyond national rules [191-199].
POLICY CONTEXT (KNOWLEDGE BASE)
International forums stress that AI’s borderless nature requires coordinated multilateral frameworks, such as UNESCO’s recommendation and the UN-led digital compact, while respecting national sovereignty [S55][S56][S66][S54].
Similar Viewpoints
Both argue that ethical considerations should be integrated early and continuously in AI development to support innovation, emphasizing mechanisms such as flag‑offs, sandbox testing and human oversight [38-40][61-68].
Speakers: Dr. Tawfik Jelassi, Debjani Ghosh
Innovation and ethics reinforce each other, not conflict
Need concrete mechanisms to embed ethical oversight throughout the AI lifecycle
Both stress that AI innovation must include perspectives and creators from diverse and under‑represented contexts to avoid bias and ensure relevance [100-113][279-285].
Speakers: Virginia Dignam, Rita Soni
Innovation should move beyond the Western “hammer” mindset and incorporate diverse cultural traditions
Inclusion of developers from low‑resource contexts is crucial to ensure relevance and fairness
Both see regulation (risk‑based or product‑level safeguards) as a means to build trust and enable wider adoption of AI, rather than stifling innovation [80-84][151-159].
Speakers: Brando Benifei, Paula Goldman
EU AI Act adopts a risk‑based approach, prohibiting harmful uses such as predictive policing
Ethical AI must be built into products; trust and human‑centred transformation drive adoption
Unexpected Consensus
Regulation can be a constructive tool rather than merely a prohibition
Speakers: Brando Benifei, Virginia Dignam
EU AI Act adopts a risk‑based approach, prohibiting harmful uses such as predictive policing
Engineers need humanities and social‑science training to ask why problems matter and who benefits
While Brando emphasizes a risk-based regulatory framework that bans certain uses, Virginia argues that regulation should not be seen only as a prohibitive ‘finger’ but as an experimental, adaptive tool that can guide ethical development. Both converge on the view that regulation, when designed appropriately, supports ethical innovation rather than hindering it [80-84][123-128].
POLICY CONTEXT (KNOWLEDGE BASE)
Recent regulatory approaches in the EU, UK and other jurisdictions illustrate how safety-focused rules can foster innovation, with regulators acting as proactive partners rather than obstacles [S61][S64][S70][S69].
Overall Assessment

The panel shows strong convergence on several fronts: (i) innovation and ethics are mutually reinforcing; (ii) ethics must be embedded by design with concrete mechanisms; (iii) inclusive, culturally diverse approaches are essential; (iv) practical regulatory and operational tools are needed to bridge principle‑to‑practice gaps; and (v) global, multilateral cooperation is crucial for responsible AI governance.

High consensus across speakers from UNESCO, academia, industry, and policy, indicating a shared commitment to human‑centred, ethically‑by‑design AI that is inclusive and supported by practical, risk‑aware regulatory frameworks. This consensus suggests that future initiatives can build on these common foundations to develop actionable policies, capacity‑building programmes, and international standards.

Takeaways
Key takeaways
Innovation and ethics are not opposing forces; they reinforce each other when AI is designed with human‑centred values from the start.
UNESCO’s 2021 Recommendation provides universal principles (human rights, dignity, freedoms) but there is a clear gap between those principles and on‑the‑ground implementation; concrete tools and checkpoints are needed.
Ethical AI must be embedded ex‑ante (by design) with continuous human oversight, contextualisation, and sandbox testing before deployment.
The EU AI Act demonstrates a risk‑based regulatory model that permits innovation while prohibiting high‑risk or harmful applications (e.g., predictive policing, emotion‑recognition in workplaces).
Diverse cultural traditions (e.g., African Ubuntu) should inform AI design, moving beyond the Western “hammer” mindset to a broader toolbox of approaches.
Engineers and data scientists need humanities and social‑science education to ask why a problem matters, who benefits, and what is lost.
Inclusive design (accessibility, multilingual support, varied user contexts) leads to superior, more commercially successful AI products.
AI policy is about managing the societal impact of AI (tools, capabilities, skills, and governance), not merely the technical design of the technology.
Democratising AI development, especially in low‑resource settings (Tier‑2/3 cities, regions with infrastructure challenges), is essential for fairness and relevance.
Resolutions and action items
UNESCO to develop and disseminate practical operational guidelines and sandbox frameworks to translate its ethical principles into actionable steps.
Launch and promotion of the AI Impact Commons (aiimpactcommons.global) as a platform for sharing impact stories and best practices across more than 30 countries.
Call for enhanced multilateral cooperation on global AI challenges (e.g., military AI, existential risks) as articulated by the EU representative.
Industry examples (e.g., Salesforce) to continue building escalation controls, real‑time accessibility fixes, and inclusive testing as models for other companies.
India’s NITI Aayog to expand capacity‑building programmes (Startup India, Tier‑2/3 city initiatives) that enable local developers to participate in AI design and development.
Unresolved issues
How to achieve alignment of ethical values across culturally diverse societies when a universal consensus is unattainable.
Specific mechanisms for holding humans accountable for AI outcomes when bad actors exist and regulation is still an afterthought.
Detailed pathways for operationalising UNESCO’s recommendation within varied national legal and regulatory frameworks.
Effective strategies to include developers from low‑resource environments (e.g., those facing power cuts) in the AI design process.
Clarification of the scope and enforcement mechanisms of AI policy beyond high‑level definitions.
Suggested compromises
Adopt a risk‑based regulatory approach (as in the EU AI Act) that allows low‑risk innovation while prohibiting or tightly controlling high‑risk uses.
Integrate ethics early in the development lifecycle (ethical‑by‑design) rather than applying it retroactively, balancing speed of innovation with safeguards.
Use a “toolbox” of methods (multiple frameworks, standards, and evaluation metrics) instead of a single “hammer” solution, accommodating diverse contexts.
Combine formal regulation with industry self‑governance and standards to ensure compliance without stifling innovation.
Thought Provoking Comments
I don’t see a contradiction between innovation and ethics; rather, they reinforce each other because ethical design makes AI more trustworthy and widely adopted.
Challenges the common narrative that ethics hinders innovation and reframes ethics as a catalyst for broader adoption, shifting the debate from conflict to synergy.
Set the tone for subsequent speakers to discuss integration of ethics early in the design process, leading Debjani and Virginia to elaborate on operationalizing ethics by design.
Speaker: Dr. Tawfik Jelassi
The real choice is not between innovation and ethics, but between using technology to solve humanity’s biggest problems or to amplify conflict and harm; we cannot align everyone on the same ethical values, so accountability must remain with humans.
Broadens the conversation from regulatory frameworks to fundamental human agency, emphasizing that technology will always be used both ethically and unethically, and that responsibility lies with people.
Prompted Brando Benifei to discuss the need for risk‑based regulation and sparked a deeper discussion on accountability, influencing the panel to consider practical mechanisms for human oversight.
Speaker: Debjani Ghosh
Innovation is more than just applying a new hammer to any nail; it should challenge us to go further and be rooted in diverse cultural traditions (e.g., African Ubuntu) rather than the Western Cartesian view of ‘I think, therefore I am.’
Introduces a novel perspective on the cultural bias in AI development and calls for a redefinition of innovation that incorporates collective, non‑individualistic values.
Shifted the dialogue toward inclusivity and the importance of cultural context, leading participants like Paula Goldman and Debjani to discuss practical ways to embed diverse user needs into product design.
Speaker: Virginia Dignam
Regulation and ethics should not be seen as mere prohibitions or finger‑pointing; they are tools for experimentation, verification, and continuous improvement, much like the scientific method.
Reframes regulation as an enabling, iterative process rather than a static barrier, challenging the audience’s preconceived notions about compliance.
Encouraged Brando Benifei to elaborate on the EU AI Act’s risk‑based approach and inspired the group to consider sandbox testing and iterative oversight as part of ethical AI deployment.
Speaker: Virginia Dignam
We need to embed oversight into the entire development lifecycle with clear flag‑offs at each stage, turning ethics into a design‑by‑default principle rather than an afterthought.
Provides a concrete operational model for ethical AI, moving the conversation from abstract principles to actionable processes.
Guided Paula Goldman to describe Salesforce’s practical implementation (e.g., escalation protocols, human‑in‑the‑loop), illustrating how the proposed lifecycle approach can be realized in industry.
Speaker: Debjani Ghosh
The EU’s risk‑based AI Act deliberately prohibits certain high‑risk uses (e.g., predictive policing, emotional recognition at work) and promotes transparency to build trust, showing that regulation can coexist with innovation.
Offers a concrete example of balancing innovation and ethics through targeted prohibitions, countering the argument that regulation always stifles progress.
Validated Dr. Jelassi’s and Debjani’s points about the necessity of early ethical safeguards, and reinforced the theme that selective restriction can protect fundamental rights while allowing other innovations to flourish.
Speaker: Brando Benifei
AI can be a force for good or harm depending on how humans use it; technology itself is neutral, but its impact hinges on human choices, education, and capacity building.
Reasserts the centrality of human agency and the importance of education and awareness, linking back to UNESCO’s mission of building peace through knowledge.
Steered the discussion toward capacity‑building initiatives, prompting Debjani to mention the AI Impact Commons and highlighting the role of knowledge sharing in bridging the global north‑south divide.
Speaker: Dr. Tawfik Jelassi
Education must integrate humanities and social sciences with engineering to ensure that developers ask ‘why’ and consider who gains or loses, because AI is currently an empty signifier that risks being treated as magic.
Identifies a systemic gap in AI development—lack of interdisciplinary training—and proposes a solution that deepens the conversation about responsible innovation.
Influenced the panel to discuss concrete educational initiatives and the need for a broader skill set, echoing Debjani’s call for inclusive capacity building and Paula’s emphasis on user‑centric design.
Speaker: Virginia Dignam
The AI Impact Commons (aiimpactcommons.global) aggregates impact stories from over 30 countries, showing that developing nations are already using AI to solve concrete problems like malnutrition and climate resilience.
Counters the narrative that ethical AI is a luxury of the Global North by providing evidence of meaningful AI applications in the Global South.
Reoriented the conversation toward equity and scalability, reinforcing the panel’s emphasis on inclusive innovation and prompting further discussion on how to support resource‑constrained contexts.
Speaker: Debjani Ghosh
Overall Assessment

The discussion was shaped by a series of pivotal interventions that reframed the perceived tension between innovation and ethics. Early remarks by Dr. Jelassi and Debjani Ghosh shifted the narrative from conflict to synergy and human accountability, setting the stage for deeper exploration of operationalizing ethics. Virginia Dignam’s culturally‑grounded critique of ‘hammer‑and‑nail’ innovation and her redefinition of regulation as an experimental tool introduced fresh dimensions of inclusivity and iterative governance. Brando Benifei’s concrete illustration of the EU AI Act’s risk‑based prohibitions provided a real‑world model of balanced regulation, while Paula Goldman’s practical examples demonstrated how these principles can be embedded in corporate practice. Collectively, these comments moved the panel from abstract policy talk to actionable strategies—lifecycle oversight, interdisciplinary education, and global impact sharing—thereby enriching the dialogue and highlighting pathways for responsible AI development across diverse contexts.

Follow-up Questions
What are the biggest gaps between AI ethics principles and their implementation on the ground?
Identifies the challenge of translating UNESCO’s 2021 AI Ethics Recommendation into concrete actions in different contexts.
Speaker: Maria Grazia (moderator)
What mechanisms can embed ethical reflection into the everyday operations of companies and sectors?
Seeks concrete tools and processes for operationalising AI ethics within industry and public sector workflows.
Speaker: Maria Grazia (moderator) – directed to Debjani Ghosh
How does the EU risk‑based AI regulatory approach relate to human oversight and redress mechanisms for ethical AI?
Aims to clarify the alignment between regulatory risk categories and practical safeguards such as human‑in‑the‑loop and remediation pathways.
Speaker: Maria Grazia (moderator) – directed to Brando Benifei
How can we prevent people from being merely consumers of AI and instead enable them to steer technology toward desired societal outcomes?
Addresses the need for empowerment, participation, and agency of citizens in shaping AI development rather than passive use.
Speaker: Maria Grazia (moderator) – directed to Virginia Dignum
How can companies translate AI ethics principles into concrete models that deliver results and serve people?
Looks for practical frameworks, metrics and product‑level implementations that align business goals with ethical standards.
Speaker: Maria Grazia (moderator) – directed to Paula Goldman
What does putting people at the centre of AI mean in practice, especially regarding information awareness and capacity building?
Explores operational steps (e.g., awareness campaigns, capacity development) needed to make AI truly human‑centred.
Speaker: Maria Grazia (moderator) – directed to Dr. Tawfik Jelassi
How can multilateral cooperation be strengthened to address global AI challenges such as military use, existential risks and AI for peace?
Calls for international governance mechanisms beyond national regulation to manage high‑stakes AI applications.
Speaker: Maria Grazia (moderator) – directed to Brando Benifei
What is AI policy and how should it address the impact of AI across its lifecycle?
Seeks a clear definition and scope of AI policy that focuses on impact assessment, not just technology design.
Speaker: Rajan (audience) – addressed to Virginia Dignum
How can we involve and support developers who experience infrastructure challenges (e.g., power cuts, poor connectivity) to design inclusive AI?
Highlights the need for inclusive participation of engineers from under‑served regions in AI design and development.
Speaker: Rita Soni (audience) – addressed to panel (Debjani Ghosh responded)
How can the UNESCO AI Ethics Recommendation be operationalised in diverse national and local contexts?
Points to the research gap on translating global principles into country‑specific policies, standards and implementation tools.
Speaker: Dr. Tawfik Jelassi (UNESCO ADG) and Maria Grazia
What effective sandbox‑testing frameworks can be used to evaluate ethical AI before large‑scale deployment?
Calls for research on pre‑deployment testing environments that assess bias, safety and societal impact.
Speaker: Debjani Ghosh
What are the impacts of AI initiatives in developing countries as captured in the AI Impact Commons, and how can successful models be scaled globally?
Suggests systematic study of documented impact stories to identify replicable solutions for issues like malnutrition, health, climate resilience.
Speaker: Debjani Ghosh
How can AI be designed respecting non‑Western epistemologies (e.g., Ubuntu) rather than solely Cartesian, individualistic frameworks?
Calls for research into culturally grounded AI paradigms that embed collective values and alternative notions of intelligence.
Speaker: Virginia Dignum
What metrics can assess the trade‑off between rapid innovation and ethical safeguards?
Identifies the need for quantitative or qualitative indicators to monitor whether ethical controls hinder or enable innovation.
Speaker: Multiple panelists (implicit)
How can education curricula integrate humanities and social sciences to improve engineers’ ethical awareness and decision‑making?
Highlights the research need for interdisciplinary training programs that embed questions of purpose, beneficiaries and societal impact.
Speaker: Virginia Dignum
What best practices ensure AI accessibility across languages, accents, disabilities and other diverse user contexts?
Calls for systematic study of inclusive design techniques, such as real‑time accessibility tools, to improve product performance and market success.
Speaker: Paula Goldman
What governance structures are needed to keep accountability with humans rather than delegating it to AI systems?
Points to the need for frameworks that embed human responsibility throughout the AI lifecycle, from design to post‑deployment.
Speaker: Debjani Ghosh
How can global AI standards be harmonised while allowing for local contextualisation and flexibility?
Seeks research on balancing universal norms with country‑specific adaptations to avoid regulatory fragmentation.
Speaker: Brando Benifei and UNESCO representatives
What are the risks of unregulated AI in high‑impact domains (e.g., predictive policing, emotional recognition) and how can they be mitigated?
Calls for detailed risk assessments and policy recommendations for prohibited or tightly regulated use‑cases.
Speaker: Brando Benifei
How can AI for peace be operationalised in geopolitically tense environments?
Identifies a research gap on mechanisms to deploy AI for conflict prevention, humanitarian assistance and diplomatic confidence‑building.
Speaker: Brando Benifei

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

How nonprofits are using AI-based innovations to scale their impact

Session at a glance: summary, keypoints, and speakers overview

Summary

The panel, moderated by Manohar Sreekanth, opened by having an AI introduce the participants and set the stage for a discussion on an AI-focused cohort program for nonprofits [5-7][25-28][30-34][36-38][42-44]. The program, run from September to December, was a four-month initiative anchored by Project Tech for Dev that brought together seven NGOs to develop AI use cases [50-53]. Its purpose was to explore the genesis, structure, and learnings of the cohort model and to derive takeaways for other nonprofits [54-59].


Erica Arya explained that Tech4Dev had previously run data-catalyst cohorts and recognized that hands-on mentorship was needed because many NGOs lack engineering capacity and need guidance to move from pilot to scale [64-73][74-82]. She described the open-application process, screening, free pilot status, and the requirement for leadership buy-in and a dedicated internal resource to anchor the effort [85-93]. Temina Madon linked the cohort approach to startup incubators such as Y Combinator, arguing that pooling scarce technical and product-management resources (e.g., a pool of ten part-time AI engineers) makes support more efficient for NGOs that cannot hire full-time staff [106-118][123-129]. She also highlighted the difficulty of adapting Silicon Valley product thinking to social-impact work, noting the scarcity of product managers in the sector [130-138].


Mainak Roy described a WhatsApp-based chatbot for teachers that aimed to suggest evidence-based pedagogical strategies, but it encountered technical glitches, such as the bot not recognizing an initial “hi” greeting, and needed guardrails on permissible queries [155-163][241-246]. Steven Suting outlined the learner-centric problem of heterogeneous student needs and teacher shortages, emphasizing that their solution seeks to emulate human signals rather than merely disseminate information, turning the effort into a behavior-science challenge [166-174][258-266]. Preetam Sukumar shared that his AI-generated student-report summaries are in a pilot-to-rollout phase, have been used by about 15 teachers, and suffer from LLM hallucinations that sometimes invert performance metrics, prompting ongoing prompt-engineering [185-200][207-214][216-227]. Across the cohort, participants reported that collaboration, facilitated by knowledge partners Digital Future Labs and Statle for responsible AI and AI safety, provided blueprints that enabled rapid prototyping, “fail-fast” learning, and avoidance of duplicated effort [300-308][319-323][350-359].


The panelists agreed that NGOs should start from concrete pain points rather than chasing AI for its own sake, leverage existing open-source platforms, and consider building product-management and evaluation capabilities such as the four-level impact framework described by Temina [400-404][268-277][421-423]. They also warned against “build-first” mentalities, noting that many organizations were independently creating similar solutions (e.g., predictive models for high-risk pregnancies) that could be shared through the cohort [405-410][368-382]. Overall, the discussion concluded that the AI cohort model successfully lowered technical barriers, fostered responsible AI practices, and generated actionable insights for scaling AI in the social sector [455-456].


Keypoints


Major discussion points


The AI Cohort program – genesis, structure and objectives – The panel was framed around a four-month AI cohort run by Project Tech for Dev, designed to help NGOs build AI use-cases by providing mentorship, resources and a shared learning environment [50-54]. Erica explained that the program grew out of earlier “data-catalyst” work, that a cohort-based model enables deep learning and peer-sharing, and that the pilot brought together seven NGOs through an open-application, screening and leadership-buy-in process [60-71][79-84].


Why funders adopted a cohort model – Temina described how the Agency Fund borrowed the startup-incubator/cohort philosophy (e.g., Y Combinator) to reduce isolation, provide shared technical staff and product-management resources, and achieve economies of scale for NGOs that cannot afford full-time AI engineers [106-118][123-129].


Non-profit use-cases and on-the-ground challenges


Teacher support via WhatsApp: Mainak highlighted the problem of teachers needing context-specific pedagogical guidance and the solution of a WhatsApp-based chatbot, noting early glitches with the “hi” greeting and the need for guardrails [151-163][241-246].


Learner-centred personalization: Steven outlined the heterogeneity of learners, teacher-student ratios, and the ambition to build a system that emulates human cues for behavior change [165-180][258-267].


AI-generated mentorship reports: Preetam described an AI-summarised report to replace teacher-student conversations, reporting pilot metrics, positive time-saving feedback, and issues with LLM hallucinations that required prompt-engineering [185-213][216-228].


Mentorship, knowledge partners and responsible AI – Erica detailed the inclusion of two knowledge partners (Digital Future Labs and Statle) to embed responsible-AI and AI-safety principles from day one, and the broader mentorship network that guided the NGOs [290-304]. Manohar later noted concrete safety tools (slur-list plugins, guardrails) that were considered for the bots [324-330].


Key learnings and recommendations for NGOs – Panelists converged on practical advice: focus on real pain points rather than hype (Preetam) [400-404]; avoid “jump-on-AI” bandwagon and assess existing solutions first (Erica) [426-433]; recognize duplication across NGOs and foster early collaboration (Erica’s health-NGO example) [368-381]; and invest in evaluation frameworks (golden data set, impact evaluation) to ensure responsible, scalable AI [415-419].


Overall purpose / goal of the discussion


The session aimed to showcase the design, implementation and outcomes of the AI cohort program, explain why a cohort-based approach was chosen by both the funder and the implementing organization, and surface concrete experiences, challenges and lessons learned from participating NGOs so that other social-sector actors can adopt or adapt similar AI initiatives [50-54][60-71][79-84].


Tone of the discussion


Opening: Light-hearted and experimental, with the moderator joking about letting an AI introduce the panelists [5-7].


Transition: Becomes formal and informational as the moderator and Elizabeth introduce panelists [25-34].


Deep dive: Shifts to a collaborative, problem-solving tone as participants describe their projects, technical hiccups and mentorship dynamics [151-213][258-267].


Reflective: Moves toward a reflective and advisory tone when discussing responsible AI, evaluation frameworks and broader ecosystem learning [290-304][324-330][368-381].


Closing: Concludes on an encouraging, forward-looking note, emphasizing shared take-aways and next steps for the audience [400-433][455-456].


Overall, the conversation evolves from a playful introduction to a substantive exchange of technical experiences and strategic insights, ending with actionable guidance for NGOs embarking on AI journeys.


Speakers

Manohar Sreekanth – Partner and CTO at Satva Consulting; moderator of the session; works at the intersection of technology, data, and AI for social-sector impact. [S6]


Elizabeth – AI assistant used to introduce panelists; no specific professional role mentioned. [S12]


Erica Arya – CEO of Project Tech for Dev, which develops open-source tech platforms and provides tech advisory for NGOs. [S8]


Temina Madon – Co-founder of the Agency Fund; focuses on technology, philanthropy, and economic mobility to create social impact at scale. [S3][S5]


Mainak Roy – Co-founder and CEO of Simple Education Foundation; works on AI-enabled teacher support tools. [S11]


Steven Suting – Director of Technology and Product at Quest Alliance; leads AI-powered digital learning platforms for youth in India. [S1]


Preetam Sukumar – Technology and Research Leader at Antifellows; drives tech and research efforts to uplift public-school students’ outcomes across India. [S9]


Additional speakers:


None identified beyond the listed participants.


Full session report: comprehensive analysis and detailed insights

The session opened with moderator Manohar Sreekanth welcoming the audience and joking that the panelist introductions would be handled by an AI assistant; after a brief technical glitch that muted the AI, he asked Elizabeth to speak louder and assist with introductions [1-19]. Using Elizabeth’s assistance, the moderator introduced each participant in turn. Temina Madon was presented as the co-founder of the Agency Fund, working at the intersection of technology, philanthropy and economic mobility [25-28]. Erica Arya was introduced as CEO of Project Tech for Dev, a group that collaborates with NGOs across the Global South to provide tech and data solutions [30-34]. Mainak Roy was described as co-founder and CEO of Simple Education Foundation, partnering with Indian state governments and exploring AI-supported teaching [32-33]. Steven Suting was named director of technology and product at Quest Alliance, leading AI-powered digital learning platforms for Indian youth [34]. Finally, Preetam Sukumar, technology and research lead at Antifellows, was introduced as the person driving data-driven platforms for public-school students [36-38]. Manohar’s own role as moderator and partner and CTO at Satva Consulting was then summarised [42-44].


The core of the discussion centred on a four-month cohort-based AI programme that ran from September to December and was anchored by Project Tech for Dev [50-53]. The programme brought together seven NGOs to develop AI use-cases, with the aim of exploring the programme’s genesis, structure, learnings and practical take-aways for other nonprofits [54-59]. Manohar began the deep dive by asking Erica to outline the origins of the cohort, the reasons for its design and its intended outcomes [55-58].


Erica explained that the cohort grew out of earlier “data-catalyst” initiatives, where deep, small-scale programmes with a handful of NGOs yielded valuable field-level insights that fed back into the open-source platforms they built [64-71][72-73]. She highlighted that hands-on mentorship was essential because many NGOs lack engineering capacity and need guidance to move from pilot to scale [68-82]. The cohort was open-application, with a screening process that ensured each NGO’s use-case fit the programme, that leadership bought in, and that a dedicated internal resource could anchor the effort for the four-month period [85-93].


Temina then described why the Agency Fund adopted a cohort model, drawing on the startup-incubator philosophy of Y Combinator and the South Park Commons community [106-118]. By pooling scarce technical staff (ten part-time AI engineers and product-management experts) across multiple NGOs, the fund could provide resources that individual organisations could not afford as full-time hires [123-129]. She noted that product-management talent is especially rare in the social sector, making the shared-resource approach a critical efficiency gain [130-138].


The panel moved to concrete education-focused pilots. Mainak Roy explained that Simple Education built a WhatsApp-based chatbot to give teachers evidence-based pedagogical recommendations for each class [155-163]. Early technical glitches arose when the bot failed to recognise a simple “hi” greeting, forcing the team to redesign the flow to treat any first message as a greeting and to impose guardrails on permissible queries [241-246]. This highlighted the gap between AI design assumptions and real-world user behaviour.
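The two fixes described above follow a simple pattern: treat any first message of a session as a greeting, then allow only in-scope queries. The sketch below is a hypothetical illustration under those assumptions; the handler name, topic list, and reply strings are invented and are not Simple Education’s actual code.

```python
# Hypothetical sketch: any first message in a session triggers the welcome
# flow (whether the user typed "hi" or anything else), and later queries are
# checked against an allow-list of permissible topics. Names are invented.

ALLOWED_TOPICS = {"lesson planning", "classroom management", "assessment"}

def handle_message(session: dict, text: str) -> str:
    """Route one incoming message for a teacher-support chatbot."""
    if not session.get("greeted"):
        # First message of the session: always respond with the welcome flow.
        session["greeted"] = True
        return "Welcome! Ask me about: " + ", ".join(sorted(ALLOWED_TOPICS))
    if not any(topic in text.lower() for topic in ALLOWED_TOPICS):
        # Guardrail: decline queries outside the permitted scope.
        return "I can only help with teaching-related questions."
    return "Here is an evidence-based suggestion for: " + text
```

In a real WhatsApp deployment, routing like this would sit behind the platform’s webhook, with session state keyed to the sender’s number.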


Steven Suting outlined Quest Alliance’s problem space: heterogeneous learner needs, severe teacher-student ratios (often 1:30) [175-180], and the limitation of traditional “chalk-and-talk” methods [166-174]. Their solution aims to emulate human cues (tone, confidence, speed of speech) to drive behaviour change, turning the technical challenge into a behaviour-science problem that requires responsible-AI safeguards [258-266].


Preetam Sukumar described Antifellows’ AI-generated student-report summaries, intended to replace the teacher-student mentorship conversation for low-touch online learners [185-200]. In the pilot phase, about 15 teachers have used the scripts for 57-75 conversations, reporting significant time savings [207-214][216-219]. However, large-language-model hallucinations (correct numbers paired with incorrect directionality, e.g., “decreased” instead of “increased”) have eroded trust and required intensive prompt-engineering; mitigating hallucinations and building a “golden” data set to align outputs remain ongoing challenges [220-227][418-419].
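One way to catch this specific failure mode (numbers right, direction wrong) is a deterministic post-check that compares the trend words in a generated summary against the underlying scores. The sketch below is a hypothetical illustration, not Antifellows’ actual pipeline; the word lists and function name are assumptions.

```python
# Hypothetical directionality check: flag summaries whose stated trend
# ("increased"/"decreased") contradicts the before/after metrics.
# The vocabulary lists below are illustrative, not exhaustive.

UP_WORDS = ("increased", "improved", "rose")
DOWN_WORDS = ("decreased", "declined", "dropped")

def directionality_ok(summary: str, before: float, after: float) -> bool:
    """Return False when the summary's stated trend contradicts the data."""
    text = summary.lower()
    says_up = any(w in text for w in UP_WORDS)
    says_down = any(w in text for w in DOWN_WORDS)
    if after > before:
        return says_up and not says_down
    if after < before:
        return says_down and not says_up
    return not (says_up or says_down)  # unchanged: no trend claim allowed
```

Summaries that fail such a check could be regenerated or routed to a human reviewer, and the flagged pairs double as candidates for a “golden” evaluation set.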


Erica highlighted the role of two knowledge partners, Digital Future Labs (responsible AI) and Statle (AI safety), engaged from the outset of the cohort so that responsible-AI and AI-safety principles were embedded rather than added as an afterthought [290-307][300-307]. Mentors from Project Tech for Dev, together with the pooled technical staff, guided each NGO through design, implementation and evaluation [310-312][236-239]. Manohar later asked whether any participants had actually deployed guard-rail plugins such as slur-lists; the panel did not report concrete deployment, so the operational status of these AI-safety tools remains unclear [324-330][331-336].


Across the discussion, several points of consensus emerged. All speakers agreed that the cohort model enables deep learning, peer sharing and efficient resource pooling for NGOs [64-71][123-129][241-246][258-266]; that responsible-AI principles should be baked in from day one via partnerships and mentorship [300-307][236-239][258-266]; that NGOs should first identify concrete pain points rather than chase AI hype [400-404][360-366][410-418]; that leveraging existing open-source platforms before building custom solutions is more pragmatic, as noted by Erica’s closing comment and Mainak’s use of WhatsApp [426-433][155-163]; and that systematic evaluation frameworks, such as Temina’s four-level model and the “golden” data set, are essential for measuring social impact [269-279][418-419][324-330].


Moderate disagreements were also noted. Erica emphasized using existing open-source platforms where possible, while Mainak described a custom WhatsApp-based chatbot and Steven described a bespoke AI-driven learning platform tailored to heterogeneous learner needs [155-163][166-174][426-433]. Temina’s emphasis on a pooled technical-staff model contrasted with Erica’s mentor-centric description, which did not explicitly mention shared staff [124-129][310-312]. Finally, while Erica described formal partnerships to ensure responsible AI, Steven focused on internal design considerations, and Manohar’s query about concrete guard-rail usage revealed uncertainty about the actual implementation of safety mechanisms [300-307][258-266][324-330].


Several thought-provoking comments shaped the direction of the dialogue. Erica’s observation that their platforms are “always listening from the ground” underscored a user-centred, iterative design philosophy [64-71]. Temina’s comparison of the Agency Fund’s cohort to Y Combinator introduced the idea of shared technical resources for the nonprofit sector [106-118]. Mainak’s anecdote about the chatbot’s failure to recognise a “hi” highlighted the importance of flexible conversational design [155-163][241-246]. Preetam’s experience with hallucinations drew attention to the trust challenges of LLMs [220-227]. Steven’s reframing of the problem as a behaviour-science issue expanded the discussion beyond pure software engineering [258-266]. Finally, Temina’s four-level evaluation framework illustrated a depth of impact measurement rarely seen in the private sector [269-279].


The panel highlighted several practical take-aways. A cohort-based AI programme can lower technical barriers, foster peer learning and embed responsible-AI safeguards [70-74][300-307]. Pooling part-time AI engineers and product managers is more cost-effective than each NGO hiring full-time staff [123-129]. Structured mentorship, open applications and leadership buy-in are essential for commitment [85-93][236-239]. Education pilots demonstrated concrete challenges: chatbot onboarding flows, LLM hallucinations and the need for prompt-engineering and “golden” data sets [241-246][220-227][418-419]. Cross-NGO collaboration prevented duplicated effort, as seen when health NGOs shared predictive-pregnancy models [368-382]. Participants urged NGOs to start from real pain points, assess existing open-source solutions (which often meet 60-70% of needs) before custom-building, and integrate rigorous evaluation from the beginning [400-404][426-433][155-163][269-279].


Unresolved issues remained. The optimal balance between building custom solutions and adopting existing models (e.g., Gemini or Claude) was not decided [443-447]. Specific metrics for measuring behaviour-change impact in Quest’s solution were not fully addressed [258-266][350-359].


In closing, Manohar thanked the panel, invited the audience-many of whom were from NGOs-to share their own learnings, and offered to provide documentation, blogs and the evaluation framework for anyone interested in replicating the programme [455-456]. Overall, the AI cohort demonstrated that targeted mentorship, pooled technical resources, and early responsible-AI practices can accelerate nonprofit AI adoption while highlighting challenges such as LLM hallucinations and the need for robust evaluation frameworks [64-71][123-129][220-227][269-279].


Session transcript: complete transcript of the session
Manohar Sreekanth

Not sure why exactly you came here, but I’m glad you’re here. We’ll make it worthwhile. So my name is Manohar, and I’ll be moderating this session. Usually what happens is in the beginning of such a panel discussion, you know, we tend to introduce the panelists. And since we are at an AI summit, so I thought, and we are all in the habit of delegating things to AI these days, I thought, why not let the AI itself introduce some of the panelists, right? We don’t know what they’re going to say. Hello, Elizabeth. How are you doing today?

Elizabeth

Hi, Manohar. I’m doing great and ready to help. What’s on your mind today?

Manohar Sreekanth

You couldn’t hear that, right? One second. Can you speak a little louder, Elizabeth?

Elizabeth

Of course, I can definitely speak up for you. Just let me know what you need, and I’ll match your pace.

Manohar Sreekanth

All right. No, right? Can’t hear. Let’s try again. And I can hear you. Usually this should work.

Elizabeth

Yes, absolutely. This usually works just fine. If there’s anything specific you want to try again or clarify, just let me know.

Manohar Sreekanth

Okay, Elizabeth, I am sitting in front of a panel right now and I need your help to introduce the panel members. Can you help me do that?

Elizabeth

Absolutely. I’ve got their profiles right here. Just let me know how you want to frame it and I can help you introduce them smoothly.

Manohar Sreekanth

Okay, let’s start with Temina.

Elizabeth

So, introducing Temina Madon. She’s the co-founder of the Agency Fund. Temina works right where technology, philanthropy and economic mobility intersect. She invests in services that empower human agency and create social impact at scale.

Manohar Sreekanth

All right, thank you. Next up is Erica.

Elizabeth

Next we have Erica Arya. She’s the CEO of Project Tech for Dev, which collaborates across the Global South to help non-profits use tech and data at scale. Under her leadership, they’ve supported over 200 NGOs with open-source platforms and advisory.

Manohar Sreekanth

Very nice, thank you. Next we have Mainak.

Elizabeth

Now we have Mainak Roy. He’s the co-founder and CEO of Simple Education Foundation. Mainak partners with state governments across India to enhance teaching and learning at scale. He’s also exploring how AI can responsibly support teachers and public education.

Manohar Sreekanth

Thank you. Next up is Steven.

Elizabeth

Up next is Steven Suting. He’s the director of technology and product at Quest Alliance. Steven leads AI-powered digital learning platforms that reach youth across India, using data-driven innovation to improve learning outcomes.

Manohar Sreekanth

Thank you. And then we have Preetam Sukumar.

Elizabeth

And here is Preetam Sukumar. He’s the technology and research leader at Antifellows. Preetam drives tech and research efforts to use data and digital platforms to uplift public school students’ life outcomes across India.

Manohar Sreekanth

Excellent. And now finally we have this chap moderating this session. His name is Manohar. Can you tell me something about him?

Elizabeth

And finally our moderator, Manohar Sreekanth. He’s partner and CTO at Satva Consulting. Manohar works at the intersection of tech, data, and AI, creating solutions that help social sector organizations scale their impact.

Manohar Sreekanth

Excellent, Elizabeth. I think you did a stellar job. Thank you so much. You wanted to do the panel for us. Next year, I’m sure, we’ll have some digital assistant sitting here. But anyway, I hope that was interesting and it did a good job as well. So the main theme for this discussion is the AI cohort program that was run from September to December. It was about a four-month program, which was actually anchored by Project Tech for Dev. And about seven nonprofits built their AI use cases in the course of these four months. So we’re going to talk about the genesis of the program, why it was structured that way, and also some of the learnings and experiences.

And finally, depending on, I’m not sure how many NGOs are there, but if you want some takeaways on how a nonprofit can also take away approaches towards AI, that’s also something which we’ll cover. I’m going to begin with Erica, because Project Tech for Dev was the entity which came up with the idea for such a program. Erica, can you give us a brief overview? Maybe start a little bit about Tech4Dev itself, right? And then why did you think about such a program in a cohort model, right? And what are the genesis, what are the motivation for this? And something about the structure also.

Erica Arya

Thanks, Manohar. So, good morning everyone. Hello. The AI did a bit of introduction for me. And I’ll just add a little human element to it. So, yes, we at Project Tech4Dev, we develop open source tech platforms for the social sector. That’s one part of our work. And in addition, we do a lot of tech advisory for the non-profits. And as we had been working with, you know, more than 200 non-profits in India mostly, what we realized was that yes, we have these open source platforms which nonprofits come, they can use it as a SaaS platform, and we also provide consulting. And through our tech advisory, what we learned was that there are times when, you know, you really need to handhold nonprofits in taking that first step towards technology or those who have already taken that step to help them move to that next level.

And we have in the past, you know, run a data catalyst program also, which was in collaboration with Dasra, and I’m just assuming you all would know this name, in the social sector. And they’ve been running these cohort-based programs, and we felt that, you know, when we do these deep-rooted programs with a small set of NGOs, we’re able to learn a lot from them. And that actually feeds into, you know, the platforms that we built. Because the way we… our platforms is not like a top-down, but it is always listening from the ground what the non-profits need, and I feel these cohort-based programs, at least for Tech4Dev, enables us to get those learnings.

In addition to that, you know, even this small cohort that comes together, there is a lot of peer learning and sharing that happens. It helps us see the commonalities across the use cases the NGOs are working on, and it helps them to further collaborate on them. So why do we rebuild things? Or if we are building new things, can we make it for the sector? Can we reuse the intellectual property that we have, you know, working with one or two other partners in the cohort? How can we leverage that and make it available more for the sector? So learning from our data catalyst program, which was a cohort-based program, and a program that the Agency Fund was also running on AI,

and it was called AI for Global Development, we felt that maybe while the Agency Fund program was working more with the nonprofits who were much ahead in their journey in technology and in using AI, we felt that can we come down a level and work with those nonprofits who were wanting to take that step of integrating AI into their work. So they had a good use case, but maybe funding was a challenge, or maybe it was the resources were a challenge, because in the sector, especially in the NGO space, we know that having engineering resources is a very big challenge, and many nonprofits do not have that. So is that a barrier for them to take that step towards adopting AI?

So we wanted to bring down those barriers by introducing mentors into this program. So just… I’ll quickly tell you about the structure of the program. It was like we got… seven NGOs together. It was an open application. Screening was done. We had calls with the NGOs to really understand that their use case was really fitting the program. It was a free program for the NGOs because as Tech4Dev, what we believe, it was a pilot for us. So when we are running a pilot, we are learning together in it. And if we have the funds to encourage these NGOs to come together, then we should play that role. So there was a proper evaluation of the use case, also ensuring that the nonprofit has a resource who can devote time to running this for the period of the four months.

someone who could anchor it within the organization. There was leadership buy-in on onboarding onto the program, because for technology to get integrated, it is very important that the leadership buy-in is there, given the investment that Tech4Dev is making. So that's what we were trying to do: unlock these couple of barriers that nonprofits have. And today we have three such nonprofits on this panel who are going to share with you what this cohort enabled for them.

Manohar Sreekanth

Right. Thank you, Erica. You talked a lot about the goals and the intent; we will hear from the nonprofits how much of that intent was also realized — and you can be candid, I'm sure, all of you. We'll move on to Temina now, Temina from the Agency Fund. Do give a brief overview of the Agency Fund itself, but also talk to us about the motivation for funding such a cohort model, because typically funding goes to specific nonprofits themselves for their AI use cases. This is a different model. What is the motivation for that?

Temina Madon

Sure. Thank you, everyone. I live in Silicon Valley, and I started life as an engineer, though I never pursued that career. One thing I've seen in Silicon Valley over the last 20 years is that a lot of new ventures are started in cohorts. Y Combinator is a quite famous incubator for for-profit companies. Before we started the Agency Fund, I was working at South Park Commons, which is a venture capital fund and tech community; there's one in Bangalore, one in San Francisco, one in New York City. And the way SPC operates is to bring founders together at the earliest stages of their journeys, as they're trying to figure out what to build, and how to build it, and who the user will be.

And that's a very lonely journey. It is filled with frustrations and failures, and it helps to have a cohort of colleagues in the same stage of life or the same stage of product development. So we brought that philosophy into the Agency Fund when we started: we bring together cohorts for accelerators. We have participated in a lot of accelerators, including the Data Catalyst program that Tech4Dev runs, and we have run the AI for Global Development accelerator as well. The AI one is a year long for us, and we bring established nonprofit organizations together, but they're all building AI for the first time. So it's that same experience of needing to understand your user, figuring out what to build, and going through the practice of putting together pipelines, experimenting, iterating, refining.

It's a journey and it's difficult, and so I do think the cohort support helps a lot. The other thing is, as a funder, our perspective has been that while we don't know everything, there are certain resources that are hard for nonprofits to pay for themselves. If we can create those resources in a pooled format and make them available, we can perhaps be more efficient. For example, not every NGO is ready to hire two full-time AI engineers; maybe they only need a part-time person right now as they're building out the pipeline. So we've put together a pool of 10 technical staff who sit alongside the organizations we fund in the accelerator.

We include product management in that as well; we've just added some additional people in product management, because the AI technology is actually easy — there are a lot of talented engineers in India and elsewhere. What is difficult is to fit the technology to the pain points we all experience in life and build a product that achieves social impact. That's really difficult; it's so much further from what the typical Silicon Valley startup has to deal with. They just need to make money from people who already have money, right? We are trying to do something much more difficult, with a population that's much more excluded. And so there aren't a lot of product managers in nonprofits.

If you're in a nonprofit, raise your hand if you have a product manager. Oh, I'm surprised — okay, there's a decent number. But it's not a role that has commonly been found in nonprofits. This is probably a special group; some of them are from the organizations on the panel.

Manohar Sreekanth

Okay, fair enough — not counted. Anyway, thank you, Temina. All right, so that set the stage: now you have a sense of why both these organizations thought of a cohort model and why they set it up. Let's get into the trenches in terms of the experience of the nonprofits themselves. It would be good to start with the problem space: what problems were you trying to solve in the context of this particular program, and at a high level, how did AI come into the picture? We'll get to the experiences and learnings a little later, but just to set the context, because we should remember that AI is a means to an end, right?

So understanding the problem that you're trying to solve — was it scale, was it inclusion, was it reach, in the context of your program — would be good to understand. Let's start with you, Mainak Roy.

Mainak Roy

Thanks. I'm audible, right? Okay. So I'll also add a bit to the intro. I'm also a teacher, and that's how I started my journey into this space. One of the things that, as a teacher, I'd always struggled with is figuring out the right classroom pedagogical strategy to use for a certain sort of class, and that often changed with the topics or the objectives I was covering that day. For a teacher doing eight lessons a day, it's often difficult to sharpen my focus and ensure I get the perfect strategy. That really was our problem statement in many ways: can we build a platform or a tool which enables teachers to say, this is the class I'm teaching today,

this is the kind of classroom that I need, this is the kind of classroom that I have — and can the AI then throw out something which is evidence-based, which has worked in other similar classes before, and support the teacher? That's what we wanted to do. And in India, if I asked how many of you use WhatsApp, probably all of you would put your hands up — and that's the case for teachers in some of the most remote areas of the country as well. So we chose WhatsApp as the platform, and we used a chatbot kind of mechanism to solve that problem for our teachers.

Manohar Sreekanth

Okay, thank you, Mainak, for that intro. So that's Simple Education Foundation — they built this Simple Teacher Buddy. We'll come back to you a little later. Stephen, Quest Alliance: what was the problem you were trying to solve, and a high-level overview of the solution?

Steven Suting

Thank you. Is this working? Yes, it's working. So we work with young people from grade 8 to class 12, and then young people who study in the TVET ecosystem, who are between about 16 and 25. What we realized in the work that we did — and this wasn't necessarily revolutionary — was that no two learners are the same. They're different in the way they think, in the way they learn, in the confidence levels they have. And that's one dimension of difference. When you add the demographic differences — their opportunity of access, the gender inequalities — and put all of this together, you have a classroom that's fairly heterogeneous.

These are not homogeneous systems, right? But unfortunately, the educational structure around this is a chalk-and-talk mechanism. It largely treats all the students the same way, and as a result you're not really meeting the learner where they are. That's one dimension. The other side of this problem is that there's a lot of stress on the availability of teachers, because in general there's about a 1:30 student-to-teacher ratio — and that's in a good scenario. So what tends to happen is these students don't necessarily have access to the information they need; you're not really meeting the learners where they are. So what we're trying to do is put in place a system that is able to hear the learners' problems in the area of career development.

Manohar Sreekanth

Okay, so you're directly targeting the learners, whereas Mainak's use case was targeting the teachers, right? So we have yet another education use case, from Avanti Fellows. Pritam, tell us about it.

Pritam Sukumar

So hi, I'm Pritam from Avanti Fellows. What Temina said and what Stephen said resonated, because we struggled a lot to find a good use case for AI — we have a lot of pain points. Our teachers have pain points, our students have pain points, our program managers have pain points, and they're all spread across the country in remote locations. So we started with a use case to improve the student report, because we reach about 200,000 students, and 98% of those are online learners. They don't get useful feedback from the test report — they just get a score and so on. Whereas in our physical schools, there is a teacher talking to them.

There is a mentor talking to them. So we thought we'd improve the student report first, but that didn't work out so well. Our mentor, Akilesh from Tech4Dev, actually helped us a lot, and we iterated through a few use cases till we landed on actually replacing the teacher-student mentor conversation with an AI summary that can be useful for our low-touch online students. It collects all the data on the student's performance in the tests — how they're doing, how their attendance is, what chapters they're weak in, what chapters they're strong in — and generates a kind of script. For the teachers, the script is used to guide their conversation.

And for the online learners, the script can be used to give them proper mentorship-style guidance, so that they have actionable things to do for their next test.
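As a sketch, the script-generation step Pritam describes — structured test and attendance data in, a grounded prompt for the LLM out — might look something like this. The field names and wording here are my own illustrative assumptions, not Avanti's actual schema:

```python
# Hypothetical sketch: assemble an LLM prompt from a student's test and
# attendance data, instructing the model to use only the supplied numbers.

def build_mentorship_prompt(student: dict) -> str:
    """Turn one student's record into a grounded mentorship-script prompt."""
    weak = ", ".join(student["weak_chapters"]) or "none"
    strong = ", ".join(student["strong_chapters"]) or "none"
    return (
        f"You are preparing a mentorship script for {student['name']}.\n"
        f"Last test score: {student['last_score']}% "
        f"(previous: {student['prev_score']}%).\n"
        f"Attendance: {student['attendance_pct']}%.\n"
        f"Weak chapters: {weak}. Strong chapters: {strong}.\n"
        "Write three encouraging, actionable suggestions for the next test. "
        "Use only the numbers given above; do not invent any figures."
    )

# Example record (invented for illustration):
prompt = build_mentorship_prompt({
    "name": "Asha",
    "last_score": 62, "prev_score": 55,
    "attendance_pct": 80,
    "weak_chapters": ["Optics"], "strong_chapters": ["Algebra"],
})
```

Grounding the prompt in explicit numbers, as above, is also what makes the hallucinations Pritam mentions next detectable after the fact.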

Manohar Sreekanth

Right, now let's dig a little deeper. Where are we on this journey, Pritam? How was the solution deployed? How many people are using it? Is it in the pilot phase? Can you tell us more?

Pritam Sukumar

I think it's somewhere between the pilot and the rollout. Around 15 teachers have had 57 to 75 conversations with students using these scripts, and we are working on feedback, because there are hallucinations, as LLMs do. But the hallucinations in this case are a little weird: the numbers are correct, but instead of saying "increased" it says "decreased". So there is a lot of prompt engineering we are doing to get this fixed. In the low-touch online program, the rollout is nearly ready — we are just waiting to fix these small issues before rolling out.
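One way to catch exactly this class of hallucination — correct numbers, flipped direction word — is a post-generation consistency check, rather than relying on prompt engineering alone. This is a minimal sketch under assumed wording; the regex word lists would need tuning for real report language:

```python
import re

def direction_consistent(text: str, prev: float, curr: float) -> bool:
    """Flag the increased/decreased flip: the numbers in the generated text
    may be right, but the direction word contradicts the actual delta."""
    went_up = curr > prev
    says_up = bool(re.search(r"\b(increased|improved|went up)\b", text, re.I))
    says_down = bool(re.search(r"\b(decreased|declined|dropped)\b", text, re.I))
    if says_up and not went_up:
        return False  # text claims a rise that did not happen
    if says_down and went_up:
        return False  # text claims a fall that did not happen
    return True
```

A script that fails the check can be regenerated or routed to a human before the teacher ever sees it, which also helps with the trust problem raised below.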

Manohar Sreekanth

Okay, and what’s been the response from the teachers themselves who are getting these reports?

Pritam Sukumar

The teachers are very happy — it saves them a lot of time. One of the main points of feedback we got from the teachers is that it really saves them a lot of time, because before such a conversation they would have to look at the student's last four reports. They would ask: what is happening at home? Are you getting device access? And so on. Now the students fill in a form before the mentorship conversation, we have the test data, and we give all of this in a script to the teacher. So that has been the positive side.

On the negative side, the hallucinations really bother the teachers, because there isn't 100% trust yet in what we've given them, and we are still working through that. The solution is not clear — maybe we need more AI engineers, like Temina said — because it's not clear exactly what to do with these hallucinations. To us, the LLMs are like a black box: you put something in, you get something out, and you don't know what change will bring the change that you want. Yeah, that's where we are now.

Manohar Sreekanth

Right. I guess the error rate, the hit ratio, and what kind of impact it has depend on the use case, and if the teachers can be trained to address some of this, that might be one way out. Mainak, you talked about WhatsApp being the channel — that probably eases adoption, because teachers don't have to learn something new; it's there, it's part of the conversation, and now they're getting some additional feedback and advice. By the way, I know this because I was mentoring some of the folks here: this whole program had a mentorship model, where each of these nonprofits was paired with one or two mentors, and those mentors guided the teams.

I happened to be one of the mentors for Simple Education, and we did face some interesting challenges, even if WhatsApp simplified things. So what were some of those challenges on the ground? Because when the rubber meets the road, that's when you see what I mean: every product person imagines something and starts with a hypothesis, but then it needs to be validated. So what were some of the things that we learned?

Mainak Roy

Yeah, I think there were a bunch of challenges, but my favorite is this: we wanted teachers to start the conversation with the bot with a simple "hi". But the moment teachers got the QR code and scanned it, they would start typing their problems straight in, and then the bot would glitch, because it did not get that first "hi", and the whole process of collecting the data we needed to give the teacher very specific guidance was not happening. So we ended up saying that whatever the teacher's first message is, the bot recognizes it as a "hi" and then starts the conversation. We've kept telling the teachers to start with a "hi", but it does not happen, so we've worked around it. That's been one of the challenges. We've also seen teachers figuring out what kinds of questions they can ask.

So sometimes there's been some straying away from the kinds of questions we want the teachers to ask, and because it's AI, it would still throw out some response. So there too, we've had to put some guardrails around what kinds of conversations the chatbot will entertain, and which conversations we would completely say no to.
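The two workarounds Mainak describes — treating the teacher's first message as a "hi", and declining off-topic questions — could be sketched roughly like this, assuming a generic WhatsApp-style webhook handler. The session store, topic list, and replies are purely illustrative, not Simple Teacher Buddy's actual implementation:

```python
# Hypothetical message handler for a WhatsApp-style chatbot webhook.

ALLOWED_TOPICS = ("lesson", "class", "pedagogy", "strategy", "students")
sessions: dict = {}  # phone number -> intake flow already started?

def handle_message(phone: str, text: str) -> str:
    # Workaround 1: whatever the teacher sends first is treated as "hi",
    # so the intake flow always starts even when the greeting is skipped.
    if phone not in sessions:
        sessions[phone] = True
        return "Hi! To get started, which class are you teaching today?"
    # Workaround 2: a simple guardrail — politely decline off-topic
    # questions instead of letting the model answer anything.
    if not any(word in text.lower() for word in ALLOWED_TOPICS):
        return "I can only help with classroom teaching strategies."
    return "Thanks! Here is an evidence-based strategy for that class..."
```

In practice the topic guardrail would be a classifier or a plugin like the ones discussed later in the panel, not a keyword list, but the control flow is the same.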

Manohar Sreekanth

Thank you. Stephen, in your case, I think you had a slightly more ambitious plan for behavior change — you're looking at nudging these learners towards a different kind of behavior, and you had a certain approach to that. Can you tell us a little bit about it? And how successful has it been?

Steven Suting

I think we're still largely in pie-in-the-sky thinking right now. We still hope to be ambitious through the entire course of the project and not be deterred by how many users we have or how many sessions we have; we're trying to be steadfast with that. For us, the fundamental premise was that we didn't want to create a system that was really an information-dissemination system, because there's enough and more of that. We wanted to create something that emulated a human being, that emulated a teacher. The intent there was really to be able to pick up signals. If you imagine a conversation between a teacher and a student, there are multiple signals a human being is able to pick up when they're having a conversation, right?

Sometimes it can just be the tonality; it can be the speed at which they're speaking; you can gauge confidence levels. But when you're working with an interface like a bot, you have limited access to the kinds of signals you can pick up. So how do you decrypt the conversation to pick up these different signals, and how do you use those signals to create a response that is intended to emulate a human being? Interestingly, when we started thinking like that, it stopped being a software problem and started becoming a behavior-science problem. For us, that was really the elevation of the situation. So right now we've got a number of people who are specialists in that space, and we've got the engineering and software folks to help us with the scaffolding, but the heart of the problem is really in understanding that human being.

Manohar Sreekanth

It's taking it really to the next level — that's so nice. So, Temina, you've now heard some of these challenges and experiences, and I think you've worked with previous cohorts as well. I want to get a sense of the Indian nonprofits you've been working and engaging with: how do they differ from the others you've engaged with in the U.S. and abroad, in terms of adoption, maturity, and the kinds of use cases they're looking at? Can you give us a lay of the land comparing the two?

Temina Madon

It's about my belief, my confidence in myself as a new mom to take care of my baby and know what caregiving I need to provide. It's about my role as a farmer, feeling that I'm going to be able to experiment with a new input, feeling more confident, or thinking maybe I'll try livestock because now I have better information and support. So we call that the user evaluation. That requires survey data collection. Most companies don't do many surveys — maybe they buy some survey data from a firm — but all of these orgs, once they're deploying to enough people, will have monitoring and evaluation systems that really try to understand the user.

And then the fourth motor we switch on we call impact evaluation: that's when you have tens of thousands, hundreds of thousands of users, and you want to understand whether the product they're using is helping them make a change in their health, education, or livelihood outcomes. We almost never see that fourth stage of evaluation in the private sector; that's something we do in the social sector. But we think this kind of four-level framework is a nice way to think about how to continuously learn from the product you're deploying, then prove its impact to society and hopefully bring in more funding to scale.

Manohar Sreekanth

Right. Thank you, Temina. In fact, for those of you who are interested, it's a very powerful, interesting framework — search for the Agency Fund evaluation framework, and I'm sure you'll find it. It's a very useful resource. Evaluation is one such orthogonal topic, but there were a couple of others as well. Erica, from a program-structure point of view, you had some knowledge partners come in, right? It was not just these nonprofits paired with mentors; you also had some others. Can you tell us a little more about those knowledge partners and their roles, and then we'll get into how they impacted the work the nonprofits were doing.

Erica Arya

Thanks, Manohar. I think this is a very critical part of the entire program design. While we at Tech4Dev are a bunch of engineers and CTOs, we know that the problem we are trying to solve is not something we can do alone. In whatever we do, we don't want to reinvent the wheel, and we know we don't know it all. So we always work in our programs through collaborations and partnerships with people who have expertise in those areas, so that we can get the best to the cohort. So as part of this AI cohort too — what was the overall goal of the program?

It was that the nonprofits could unlock the barriers keeping them from building and deploying AI solutions that help them bring in more efficiency, scale, and impact on the ground. But as technologists, we could say: okay, we're just building something — are we building it in the right way? That's what we wanted to address. So we collaborated with two knowledge partners. One was Digital Futures Lab, who brought in expertise on integrating responsible-AI principles into your design as you work on a use case. The second knowledge partner was Tattle, which works in AI safety. We wanted the nonprofits working in this cohort to have that thinking — responsible AI and AI safety — built into their design right from the get-go.

It's not something you think about later — okay, let me just build something, and then I'll come to responsible-AI practices or think about AI safety a couple of months after I've done my pilot, when I actually see things breaking. We wanted to build that in right from the get-go, and that's why we collaborated with partners: we knew we did not know this bit. We definitely contributed through, as Manohar said, the mentorship side of the program — a lot of mentors came in from Project Tech4Dev, because we knew that is one piece we feel we do well — and we collaborated with these partners for the rest.

And I hope that this helps the nonprofits in designing better solutions.

Manohar Sreekanth

Right. That is a very unique part of this program, actually, because anyone who's in software would know the typical attitude: we'll look at quality and all these things later — let's build and get the functionality out. And in the context of AI, AI safety and responsible AI tend to be the biggest afterthoughts, but here they were baked in. So now let's hear from some of the nonprofits: how did this influence the way you approached building this out? How did AI-safety or responsible-AI thinking actually help on the ground? Stephen, would you like to say something?

Steven Suting

Sure. I think for us it was supremely helpful, primarily because the artifacts these organizations helped put together were in some ways a blueprint for us. When you have a blueprint in front of you, you have a sense of what the different moving parts are, and that allowed us to rapidly prototype in a fairly modular way, which allowed us to fail fast — and if we were failing fast, we were learning faster. What can typically happen in this space of human development is that the dimensionality of the problems can be fairly large, and this phenomenon of analysis paralysis can set in; it can be overwhelming to decide what to pick and choose, because they all seem like really important problems. But having frameworks like what Temina was talking about helps you chunk things into where a problem is best suited and most relevant. Then you solve for that section, which influences what you do in the next step, and you shuttle back and forth till you've crossed a certain critical threshold of it making sense for the problem you're solving.

Manohar Sreekanth

One of the things I remember from the AI-safety part was what Tattle came up with: these plugins where they have created a slur list — a list of words the AI is not supposed to use — which they have crowdsourced, and there are also some guardrail plugins from Llama and so on. The fact that those things can be plugged into your bot to make sure it's behaving the right way was, I thought, an interesting thing. Did any one of you use any of those? Not yet? Right. For Simple, yes — I remember the guardrails, for the bot Mainak was talking about.

So there you could ask any question, and then the whole thinking process was about making sure that even if a random question comes — say a teacher asks a question the bot is not supposed to answer — how does it respond to that? That's something we started thinking about and baked into the whole design itself, and it was used for testing. Okay. Now, one of the unique features of this program was also the whole collaboration piece, because they were not working in isolation. We had in-person workshops where each of them came and presented, and they had sessions where — you see, all three of them are actually working on education use cases, and there are some others as well.

So how could they learn from each other? That’s one. The other thing is, and I think I’m sure you’ll recognize, you probably are also looking at similar problems in this space. How do you not reinvent the wheel? How can you leverage some of the work that’s been done by others? So this collaboration aspect also came in in a very strong way. So I want to understand from some of you, so whom else did you speak to? How did this play out as a part of the program? How did this program structure influence the way you went about it? And how was it in comparison to, let’s say, you were going about it by yourself? What was the difference?

Anyone? Anyone?

Steven Suting

I think one thing that was nice about the program design was that, by the nature of the conversations we were having, there was this sense of porousness — we weren't holding our cards too close to our chests. It was okay for me to learn from what you were doing well and what you weren't doing well, and if I had that conversation with you, it would elevate, catalyze, and expedite certain problems we were trying to solve. For example, when we were thinking about how you do responsible AI, or classification, or behavior — these are all different parts of a fairly large problem.

And we had a certain approach in mind, but through the cohort we realized that people were trying to solve similar things in a different way. For example, we hadn't thought about this the agentic way, so when people started saying "this is what worked for us in that model of deployment", that in many ways expedited how we solved that problem. There are multiple examples like that. Also, the idea of "responsible", for example: when you say responsible, it can cover a whole gamut of meanings. What does responsible mean for a 25-year-old versus, say, a grade 8 student?

It's significantly different. Now, if you're, say, Avanti, and you're working with a cohort similar to yours: what are they doing about responsibility that you want to do, and what is that conversation we want to have? So I feel that was certainly helpful.

Manohar Sreekanth

Interesting. Erica?

Erica Arya

Yeah, I'll just add to that. While these were seven NGOs — of which four were from education, and three are sitting here — there were also two health NGOs, and they were both working on building a predictive model to predict high-risk pregnancies. They didn't know about each other building something like that. But when the cohort came together, they discussed it, the mentors also got together, and that's when they realized they were building the same thing. So then: are we happy to collaborate? The calls that followed were about getting these two NGOs together and at least sharing, and building it together — that was one thing.

Second, while we were working with these seven NGOs, one of them — not represented here — was working on an assessment model: how can we assess the answer sheets coming from students, because that takes a lot of the teachers' time. And since we've been working with 200 nonprofits, we were also aware of two other nonprofits trying to build a similar solution. So, going beyond the cohort, we brought all three of them together, and we discussed the assessment models they were building, the rubrics. We were seeing a lot of commonality — so could we build this together and learn from each other's mistakes?

So I'm just trying to share that the learning and sharing goes beyond the cohort too, and that is where platforms or organizations like us, who are more ecosystem players, have a role to play. These are two examples where I could see collaboration happening just from bringing a set of seven NGOs together in a room.

Manohar Sreekanth

Right. Thank you, Erica — those are very interesting and powerful examples, also building on what Temina said earlier, contrasting this with the corporate space. I've spent a couple of decades on the corporate side, and this kind of collaboration at an early stage of technology is very, very rare; you don't see it on the for-profit side. So it's fascinating to see. And of course, we are in a room where I'm sure many of you are working on similar kinds of problems, and just exchanging notes is so important. All right, now wrapping up — we have about six and a half minutes. I'd like to understand how many of you in the audience are from nonprofits.

Can you raise your hands? Okay, quite a few. All right, so now we have keen nonprofit listeners. So the question to the nonprofits here, and probably others as well: what are some of the learnings you came away with that surprised you, or that you didn't know before, which could be useful for the nonprofits in the audience as they go about their AI journeys? If you could share some of your nuggets — Pritam, please.

Pritam Sukumar

So one thing we learned is to stop looking for use cases for AI, and instead look at pain points and troubles we already have and see whether AI is a good fit there. It doesn't have to be: AI is a new technology, let's use it and build something amazing and innovative. There's already a lot of friction in the way even our nonprofit runs — a lot of inefficiency — so there is a lot that AI can do to help daily operations, like helping our fundraising team write grants or our programmers write code, which of course is the number one use case right now.

But yeah, that is one thing we really believe, at least I really believe now.

Steven Suting

You know, when AI started becoming really popular, it almost became a knee-jerk reaction for organizations to jump on that bandwagon — and if you weren't doing it, you couldn't get into the right rooms with the right kind of funders, and you would be lost in that race. Not always, but more often than not, there is that pressure. I think what has happened as a consequence is that some part of our DNA was forgotten. Civil society organizations have always been a species that asks questions — sometimes over-indexed on asking questions — but when software development became so much easier, when AI became so much easier, we started leading with solutions, and we started deploying solutions. And this is true for our space as well: when we go to a classroom, there will be five different NGOs solving the same problem. We all say we're working for the learner, putting the learner at the center of design, but there are five not-for-profits all trying to solve the same problem for that learner, without recognizing what we're doing to that learner.

And we need to have that conversation, right? So sometimes I think it’s important to just take a moment and have that conversation, or do that assessment, before you start leading with solutions.

Manohar Sreekanth

Yeah, it reminds me of multiple people building an app for ASHA workers and they have to interface with multiple apps themselves, right? Yes, Roy?

Mainak Roy

Yeah, definitely plus one to what both of them have mentioned. I think: don’t build what is sexy, build what is needed. We had a very different use case to start off with, and we ended up building something quite different because we recognized that was actually the need of the hour. So that’s definitely one thing I would say. And the second piece is, especially when we are building for the non-profit space in India, scale becomes a very important metric: we want to reach thousands, or hundreds of thousands, of teachers or students or whoever.

And in that whole race for scale, we miss out on some fundamentals like responsibility, the responsible AI that we were talking about. So how do we build that in? That’s something I have learned through the process. There’s something we’ve been working on now called the golden set, to ensure that the responses the AI gives are actually aligned with what we want the AI to give. And if we fix those pieces early on, everything afterwards is just easier. That’s something we’ve learned through this process.
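The "golden set" Mainak describes works like a small regression suite for model responses: curated prompts paired with the answers the organization actually wants, against which new responses are scored before changes ship. A minimal sketch follows; all names, the word-overlap scorer, and the pass threshold are illustrative assumptions (real deployments typically score with embeddings or an LLM judge rather than word overlap).

```python
# Sketch of a "golden set" check: score a model's answers against
# curated reference answers and report the fraction that pass.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class GoldenExample:
    prompt: str
    reference: str  # the answer we actually want the AI to give


def similarity(a: str, b: str) -> float:
    """Toy Jaccard word-overlap score in [0, 1]; a stand-in for a real scorer."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)


def evaluate(model: Callable[[str], str],
             golden_set: List[GoldenExample],
             threshold: float = 0.5) -> float:
    """Return the fraction of golden examples the model answers acceptably."""
    passed = sum(
        similarity(model(ex.prompt), ex.reference) >= threshold
        for ex in golden_set
    )
    return passed / len(golden_set)


# Hypothetical two-item golden set checked against a stub "model"
# that always gives the same answer, so only the first item passes.
golden = [
    GoldenExample("What is the capital of India?",
                  "The capital of India is New Delhi"),
    GoldenExample("Who wrote the national anthem?",
                  "Rabindranath Tagore wrote the national anthem"),
]
stub = lambda prompt: "The capital of India is New Delhi"
score = evaluate(stub, golden)  # 1 of 2 examples passes
```

The point of fixing this early, as the panel notes, is that the golden set becomes a cheap gate: any prompt change or model swap can be re-scored in seconds before it reaches learners.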

Manohar Sreekanth

Great. Thank you, Mainak. The golden data set you talked about is part of an evaluation framework, also something which is very strongly emphasized in the context of the program. We have a couple of minutes. We’d like to open up for questions, but before that, any last words? Erica, Temina?

Erica Arya

Yes. I totally resonate with what all my three panelists said. Don’t just get on to using technology because everyone else is doing it; only if it really meets your needs should you think about technology. And when you do, please look around: there might already be solutions existing, so don’t jump straight to building custom solutions, because what you are working on is probably something many others are working on too. That will save you time, effort, and money. Even if an existing solution, say an open-source platform, meets 60-70% of your needs, just start with that. Then you can improve on it and decide whether you want to keep using it or build something custom. So yeah, those are my thoughts.

Manohar Sreekanth

It’s a fantastic note to end on. Thank you. Thank you for the panel. Big hand to them. We have one minute and three seconds. Questions? I have a few hands up. Yeah.

Audience member

Like a Gemini gem or a Claude project, did you all evaluate that? Curious to know the journey of that build versus buy, or just adopt, for any one of them.

Erica Arya

Yeah, I can just add to that. When we were talking about, say, a chatbot: when we built the Glyphic chatbot, we obviously integrated it with OpenAI, so it’s not that you are reinventing anything. And even now, as we are building an AI platform, what are we doing? We are integrating different open-source elements that are already there. So even when we as an organization build something new, we are picking up things which already exist, stitching them together, and then, when we speak to our NGOs about what is missing, adding layers to build just that part.

So it is always there in our way of working that we integrate open-source solutions that exist rather than build. Even for dashboards: we have a data management platform, but we’ve not been creating dashboards on it ourselves; we’ve integrated it with Superset, which is another open-source solution. I know you may not know these names, but what I want to call out here is that even as a tech organization that builds platforms, we’re not building things from scratch. We are integrating with other tools that meet the needs, even when we are building our own platforms.

Manohar Sreekanth

Thank you. Actually, it’s 0:00, but we’ll stay back if you have questions; we’ll have a conversation with you. Thank you for your time. It was great to see a big audience so engaged; hope it was useful. Do reach out to any one of us if you want to find out more about the program. A lot of documents have been written and published as blogs as well, from Project Tech for Dev and some of the non-profits, so happy to keep engaging with you going forward on this journey. Thank you so much, and thank you to the panel.

Related Resources: Knowledge base sources related to the discussion topics (35)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“Moderator Manohar Sreekanth asked Elizabeth to speak louder and assist with introductions after a technical glitch muted the AI assistant.”

The transcript shows the moderator addressing Elizabeth and mentioning a digital assistant, matching the report’s description [S2].

Confirmed (high)

“Tamina Madon was presented as the co‑founder of the Agency Fund, working at the intersection of technology, philanthropy and economic mobility.”

Knowledge base entries list Tamina Madon as co-founder of the Agency Fund and describe her work at the intersection of technology, philanthropy and economic mobility [S1] and [S17].

Additional Context (medium)

“Manohar’s role as moderator included introducing speakers and keeping the discussion orderly.”

Multiple sources describe the moderator’s function as introducing participants and managing the session flow, providing additional context to the report’s summary [S89] and [S90].

External Sources (96)
S1
How nonprofits are using AI-based innovations to scale their impact — Erica Arya, Steven Suting, Temina Madon
S2
How nonprofits are using AI-based innovations to scale their impact — Speakers: Erica Arya, Steven Suting, Mainak Roy, Manohar Sreekanth
S3
How nonprofits are using AI-based innovations to scale their impact — -Temina Madon- Co-founder of the Agency Fund, works at the intersection of technology, philanthropy and economic mobilit…
S4
How nonprofits are using AI-based innovations to scale their impact — Speakers: Erica Arya, Temina Madon, Steven Suting
S5
https://dig.watch/event/india-ai-impact-summit-2026/how-nonprofits-are-using-ai-based-innovations-to-scale-their-impact — So, introducing Tamina Madon. She’s the co -founder of the agency fund. Tamina works right where technology, philanthrop…
S6
How nonprofits are using AI-based innovations to scale their impact — Erica Arya, Steven Suting, Mainak Roy, Manohar Sreekanth
S7
How nonprofits are using AI-based innovations to scale their impact — All right. No, right? Can’t hear. Let’s try again. And I can hear you. Usually this should work. Excellent, Elizabeth. …
S8
How nonprofits are using AI-based innovations to scale their impact — Erica Arya, Pritam Sukumar, Steven Suting, Mainak Roy, Manohar Sreekanth, Temina Madon
S9
How nonprofits are using AI-based innovations to scale their impact — And he is Preetam Sukumar. He’s the technology and research leader of Antifellows. Preetam drives tech and research effo…
S10
https://dig.watch/event/india-ai-impact-summit-2026/how-nonprofits-are-using-ai-based-innovations-to-scale-their-impact — And he is Preetam Sukumar. He’s the technology and research leader of Antifellows. Preetam drives tech and research effo…
S11
How nonprofits are using AI-based innovations to scale their impact — Mainak Roy, Steven Suting, Pritam Sukumar
S12
Open Forum #22 Citizen Data to Advance Human Rights and Inclusion in the Di — – Elizabeth Lockwood: Representative of the UN Stakeholder Group of Persons with Disabilities for Sustainable Developmen…
S13
Global AI Governance: Reimagining IGF’s Role & Impact — Elizabeth Orembo: Thanks, Paloma. And I’m going to be unfair by not giving you a chance to have a parting shot. If I was…
S14
WS #226 Strengthening Multistakeholder Participation — Anne Rachel Inne: Good morning and welcome everybody. My name is An-Rochelle Iné, and I’m the Director General of the Na…
S15
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Takahito Tokita Fujitsu — This comment is insightful because it acknowledges that responsible AI development cannot be achieved by any single comp…
S16
Digital Public Goods and the Challenges with Discoverability | IGF 2023 — Cynthia Lo:And yeah, so some policies to consider to help improve this discoverability of the tools. We have here a coup…
S17
https://app.faicon.ai/ai-impact-summit-2026/how-nonprofits-are-using-ai-based-innovations-to-scale-their-impact — All right. No, right? Can’t hear. Let’s try again. And I can hear you. Usually this should work. Can you raise your han…
S18
Charting New Horizons: Gender Equality in Supply Chains – Challenges and Opportunities — Naa Densua Aryeetey:Thank you. Thank you, Shamika. And thank you, Angtad, once again, for inviting me to be part of this…
S19
Education meets AI — Additionally, the speakers emphasized the need for personalized learning and adaptive teaching methods. They discussed t…
S20
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — This comment is deeply insightful because it reframes the AI governance challenge from a regulatory compliance issue to …
S21
How to make AI governance fit for purpose? — This comment elevated the discussion to a more philosophical level, moving beyond technical regulatory approaches to con…
S22
RESEARCH PAPERS — developing countries opened up by the adoption of ICTs and destroy the potential for increased access to knowledge. The…
S23
© 2019, United Nations — On the other hand, the potential benefits are far from automatic, and there are major challenges, costs and ris…
S24
AN INTRODUCTION TO — In increasingly integrated world, it is difficult to maintain the traditional architecture of international policy consi…
S25
Research Publication No. 2014-7 March 17, 2014 — Conceptual challenges describe basic ‘horizontal’ challenges legislators and regulators are confronted with when conside…
S26
Can AI replace therapists? — With mental health waitlists at record highs and many struggling to access affordable therapy, someare turningto AI chat…
S27
AI & Child Rights: Implementing UNICEF Policy Guidance | IGF 2023 WS #469 — Despite occasional technical issues encountered during the project, the overall experience was reported to be positive. …
S28
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — He emphasised the need for policy that balances principle-level guidance with practical guardrails whilst avoiding overl…
S29
Presentation of outcomes to the plenary — This aligns with SDGs 13 and 14, which call for climate action and the conservation of marine life. Overall, the compreh…
S30
AI for equality: Bridging the innovation gap — The conversation maintained a consistently optimistic yet realistic tone throughout. Both speakers demonstrated enthusia…
S31
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — High level of consensus with constructive disagreements mainly on implementation details rather than fundamental princip…
S32
Science as a Growth Engine: Navigating the Funding and Translation Challenge — Low to moderate disagreement level. The speakers largely align on core issues like the importance of long-term investmen…
S33
Creating Eco-friendly Policy System for Emerging Technology — Moreover, progress in AI necessitates engagement from principal stakeholders such as innovators, policymakers, and indus…
S34
Embedding Human Rights in AI Standards: From Principles to Practice — ### Multi-Stakeholder Collaboration – **Early Engagement**: The challenge of reaching scientists and developers at the …
S35
From principles to practice: Governing advanced AI in action — ## Academic Research Perspective ## Introduction and Context Sasha Rubel: be possible tomorrow. I love that call to ac…
S36
AI and international peace and security: Key issues and relevance for Geneva — Capacity Building and Information Exchange:Supporting education and regional dialogue to bridge technological divides an…
S37
Responsible AI in India Leadership Ethics & Global Impact part1_2 — The moderator establishes that as India advances in its digital journey with AI as a powerful engine for innovation and …
S38
How nonprofits are using AI-based innovations to scale their impact — This level of early-stage collaboration proved remarkable when contrasted with typical corporate behaviour. As Manohar o…
S39
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Marco Zennaro emphasized the importance of building local expertise in embedded AI while leveraging existing open educat…
S40
How nonprofits are using AI-based innovations to scale their impact — “people were trying to solve similar things, but they did it in a different way.”[23]. Collaboration and Knowledge Shar…
S41
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — ## Introduction and Context Setting Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to s…
S42
Advancing Scientific AI with Safety Ethics and Responsibility — Summary:The speakers demonstrated strong consensus on several key areas: the need for context-specific governance framew…
S43
Advancing Scientific AI with Safety Ethics and Responsibility — The speakers demonstrated strong consensus on several key areas: the need for context-specific governance frameworks tai…
S44
Secure Finance Risk-Based AI Policy for the Banking Sector — Embedded governance means integrating accountability, transparency, and risk management into every stage of the AI life …
S45
Exploring Digital Transformation for Economic Empowerment in Africa: Opportunities, Challenges, and Policy Priorities (International Trade and Research Centre, Nigeria) — Currently, there is a lack of metrics to evaluate the impact of policies in the policy space. It is important to develop…
S46
Safeguarding Children with Responsible AI — Impact:This evidence-based intervention shifted the discussion toward the need for independent evaluation and real-world…
S47
Safeguarding Children with Responsible AI — This evidence-based intervention shifted the discussion toward the need for independent evaluation and real-world testin…
S48
Driving Enterprise Impact Through Scalable AI Adoption — From a labor market perspective, I think there’s a good case to be concerned about the impact of AI and what might happe…
S49
Adoption of agentic AI slowed by data readiness and governance gaps — Agentic AI is emerging as a new stage ofenterprise automation, enabling systems to reason, plan, and act across workflow…
S50
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Explanation:Both policy-oriented speakers surprisingly advocate for strong regulatory intervention rather than market-dr…
S51
The sTaTe of The — This development correlates with the inception of the expanded CERF and other new pooled funding mechanisms. Evaluations…
S52
Policies and platforms in support of learning: towards more coherence, coordination and convergence — 14 CEB, HLCM HR Network, doc. CEB/2003/HLCM/17 of 2 July 2003. 38. The absence of a system-wide vision of the strategic…
S53
How nonprofits are using AI-based innovations to scale their impact — Cohort model enables deep learning and peer collaboration among nonprofits
S54
How nonprofits are using AI-based innovations to scale their impact — Erica describes the structured approach of the program, which included careful screening of participants, ensuring organ…
S55
https://dig.watch/event/india-ai-impact-summit-2026/how-nonprofits-are-using-ai-based-innovations-to-scale-their-impact — and it was called AI for Global Development, we felt that maybe while agency fund program was working more with the nonp…
S56
Executive Board — 11. An emerging issue is how to best mobilize scientific networks and leverage UNESCO’s own role within them. Virtual…
S57
Rethinking learning: Hope, solutions, and wisdom with AI in the classroom — But teachers need support. They need professional development around AI literacy, reasonable class sizes that allow for …
S58
Can AI replace therapists? — With mental health waitlists at record highs and many struggling to access affordable therapy, someare turningto AI chat…
S59
https://app.faicon.ai/ai-impact-summit-2026/how-nonprofits-are-using-ai-based-innovations-to-scale-their-impact — Sometimes it can just be the tonality. It can be the speed at which they’re speaking. You can gauge confidence levels, r…
S60
When language models fabricate truth: AI hallucinations and the limits of trust — AI has come far from rule-based systems and chatbots with preset answers.Large language models (LLMs), powered by vast a…
S61
AI revolutionises academic writing, prompting debate over quality and bias — In a groundbreaking shift for the academic world, AI now contributes to at least 10% of research papers, soaring to 20% …
S62
Panel Discussion Next Generation of Techies _ India AI Impact Summit — Small startups embed safety guardrails internally
S63
Transforming Health Systems with AI From Lab to Last Mile — Summary:Both speakers emphasize moving beyond hype toward careful, thoughtful implementation – Mundel warns that rushing…
S64
Keynote-Rishad Premji — “And they are the pioneers and the thought leaders of artificial intelligence.”[13] Artificial intelligence Opening fr…
S65
Setting the Rules_ Global AI Standards for Growth and Governance — The discussion maintained a consistently collaborative and constructive tone throughout. Panelists demonstrated remarkab…
S66
Ethical AI_ Keeping Humanity in the Loop While Innovating — The discussion maintained an energetic and collaborative tone throughout, with panelists frequently building on each oth…
S67
AI Algorithms and the Future of Global Diplomacy — The tone was professional and collaborative throughout, with participants demonstrating mutual respect and shared intere…
S68
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — The discussion maintained a consistently optimistic and solution-oriented tone throughout. While acknowledging the serio…
S69
Non-regulatory approaches to the digital public debate | IGF 2023 Open Forum #139 — Juan Carlos Lara:The mic is open, guys. What? The mic is open. It’s a hot mic. I think it is time to start. So it is now…
S70
Empowering People with Digital Public Infrastructure — Brendan Vaughan: Welcome. My name is Brendan Vaughn. I am the editor-in-chief of Fast Company. Fast Company, for those…
S71
Global AI Governance: Reimagining IGF’s Role & Impact — – **Elizabeth Orembo** – Research Fellow, Research ICT Africa; Moderator of the panel Elizabeth Orembo concluded the se…
S72
Executive summary — The first report, in 2008, found a significant gap between the potential of ICT and what parliaments had actually accomp…
S73
https://app.faicon.ai/ai-impact-summit-2026/skilling-and-education-in-ai — To Rakesh, historically, you know, whenever general purpose huge technological change comes in, it ends up increasing di…
S74
AI for equality: Bridging the innovation gap — The conversation maintained a consistently optimistic yet realistic tone throughout. Both speakers demonstrated enthusia…
S75
Agenda item 6: other matters/OEWG 2025 — The overall tone was constructive and diplomatic, with most delegations expressing willingness to compromise and find co…
S76
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S77
Impact &amp; the Role of AI How Artificial Intelligence Is Changing Everything — The discussion maintained a cautiously optimistic tone throughout, balancing enthusiasm for AI’s potential with realisti…
S78
Driving Indias AI Future Growth Innovation and Impact — The discussion maintained an optimistic and forward-looking tone throughout, characterized by enthusiasm for India’s AI …
S79
Artificial General Intelligence and the Future of Responsible Governance — The discussion maintained a serious, analytical tone throughout, characterized by cautious optimism mixed with genuine c…
S80
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S81
How to make AI governance fit for purpose? — The discussion maintained a collaborative and optimistic tone throughout, despite representing different national perspe…
S82
Responsible AI for Children Safe Playful and Empowering Learning — The discussion maintained a consistently thoughtful and cautious tone throughout, with speakers demonstrating both excit…
S83
Information Society in Times of Risk — The discussion maintained a consistently academic and collaborative tone throughout. It was professional and research-fo…
S84
Closing Ceremony — The discussion maintains a consistently positive and collaborative tone throughout, characterized by gratitude, celebrat…
S85
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S86
WS #460 Building Digital Policy for Sustainable E Waste Management — The discussion maintained a professional, collaborative, and solution-oriented tone throughout. Speakers were constructi…
S87
Closing Ceremony — The overall tone was positive and forward-looking. Speakers expressed gratitude to the hosts and participants, emphasize…
S88
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The moderator introduces himself at the start of the session, establishing his presence for the audience.
S89
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — The moderator opens, transitions, and closes the session, guaranteeing that speakers are introduced, the discussion proc…
S90
The Global Power Shift India’s Rise in AI & Semiconductors — -Moderator: Role not specified in detail, appears to be the session moderator who introduced the panelists and managed t…
S91
Press Briefing by HMIT Ashwani Vaishnav on AI Impact Summit 2026 l Day 5 — The moderator ensured that each participant identified themselves before speaking, keeping the question‑and‑answer sessi…
S92
IGF Parliamentary track – Session 2 — The moderator introduced the session, emphasizing its importance within the IGF framework and noting the diverse linguis…
S93
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — -Moderator: Session moderator who introduced speakers and managed the event flow.
S94
High-Level Track Inaugural Leaders TalkX: Forging partnerships for purpose: advancing the digital for development landscape — – **Participant** – Announcer/Host (introduced the event and participants)
S95
Redrawing the Geography of Jobs / Davos 2025 — David Bach: Thank you for that. So jobs in one place, a youthful population in another. You mentioned Africa already….
S96
5th ‘Road to Bern via Geneva’ dialogue: On data and Tech4Good — Scarionifeels that Global South partners are crucial players in this process. These include NGOs, universities in the Gl…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Erica Arya
3 arguments · 160 words per minute · 1921 words · 717 seconds
Argument 1
Cohort model enables deep learning, peer sharing, and iterative platform improvement (Erica Arya)
EXPLANATION
Erica explains that cohort‑based programs allow the organizers to learn deeply from a small set of NGOs, fostering peer learning and sharing of experiences. This iterative feedback loop informs and improves the open‑source platforms they develop for the social sector.
EVIDENCE
She notes that working with over 200 nonprofits showed the value of deep-rooted cohort programs, which enable learning from participants and feed insights back into platform development, while also creating peer learning and collaboration opportunities among NGOs [70-74].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External sources note that cohort-based programs foster peer learning and reduce isolation among NGOs, as highlighted in [S1] and [S2].
MAJOR DISCUSSION POINT
Benefits of cohort learning
AGREED WITH
Temina Madon, Manohar Sreekanth, Steven Suting
Argument 2
Knowledge partners (Digital Future Labs, Statle) embed responsible AI principles and safety considerations from the outset (Erica Arya)
EXPLANATION
Erica describes how the program partnered with Digital Future Labs and Statle to integrate responsible AI and AI safety into project designs from the beginning. This ensures that ethical considerations are not an afterthought but built into the solutions.
EVIDENCE
She states that the two knowledge partners brought expertise on responsible AI design and AI safety, which were incorporated into the cohort’s workflow to guide NGOs in building ethically sound AI solutions [300-307].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Responsible AI integration is emphasized as requiring multi-stakeholder collaboration and governance, discussed in [S15].
MAJOR DISCUSSION POINT
Embedding responsible AI via partners
AGREED WITH
Manohar Sreekanth, Steven Suting
DISAGREED WITH
Steven Suting, Manohar Sreekanth
Argument 3
Prioritize existing open‑source platforms that meet most needs before investing in custom development (Erica Arya)
EXPLANATION
Erica advises NGOs to first assess whether open-source tools already satisfy 60-70% of their requirements, and only then consider customizing or building new solutions. This approach saves time, effort, and financial resources.
EVIDENCE
She emphasizes that leveraging existing open-source platforms can meet the majority of needs and recommends starting with them before developing bespoke technology, highlighting the benefits of reusing existing solutions [426-433].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The recommendation to first assess open-source tools before custom builds is supported by observations in [S1] and [S2].
MAJOR DISCUSSION POINT
Leverage open‑source before custom builds
AGREED WITH
Steven Suting, Pritam Sukumar
DISAGREED WITH
Mainak Roy, Steven Suting
Temina Madon
3 arguments · 174 words per minute · 805 words · 276 seconds
Argument 1
Funding a cohort allows pooling of scarce technical resources (e.g., part‑time AI engineers, product managers) to serve multiple NGOs efficiently (Temina Madon)
EXPLANATION
Temina explains that by funding a cohort rather than individual NGOs, the Agency Fund can create a shared pool of technical staff and product managers. This model lets several NGOs access expertise they could not afford on their own.
EVIDENCE
She describes assembling a pool of ten technical staff and product managers who work alongside funded NGOs, noting that many NGOs cannot afford full-time AI engineers and benefit from part-time support [124-129].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Cohort funding is described as a way to unlock barriers and provide shared technical support for nonprofits, as mentioned in [S5] and reinforced by programme design insights in [S1].
MAJOR DISCUSSION POINT
Resource pooling through cohort funding
DISAGREED WITH
Erica Arya
Argument 2
A four‑level evaluation framework (user, monitoring, impact, etc.) is rarely seen in the private sector but essential for social‑impact projects (Temina Madon)
EXPLANATION
Temina outlines a four‑stage evaluation approach that moves from user evaluation through monitoring to impact assessment. She argues that this systematic evaluation is common in the social sector but uncommon in private enterprises.
EVIDENCE
She details the stages (user evaluation, monitoring and evaluation, and impact evaluation), highlighting their importance for measuring outcomes at scale, which she notes is seldom applied in the private sector [269-279].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The four-level evaluation framework (user experience, behavior, evaluation, impact) is detailed in [S1] and [S2].
MAJOR DISCUSSION POINT
Four‑level evaluation framework
AGREED WITH
Mainak Roy, Steven Suting
Argument 3
Providing a shared pool of product managers and engineers addresses the chronic skill gap in NGOs (Temina Madon)
EXPLANATION
Temina points out that many NGOs lack dedicated product management and engineering talent. By offering a shared pool of such professionals, the cohort model bridges this skill gap and enhances project execution.
EVIDENCE
She mentions that the pool includes product managers and technical staff, emphasizing that product management roles are rare in NGOs and that this shared resource model helps fill that void [124-129].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Adding product-management expertise and engineering talent to NGOs as a way to bridge skill gaps is highlighted in [S1].
MAJOR DISCUSSION POINT
Skill gap mitigation via shared talent pool
Manohar Sreekanth
2 arguments · 160 words per minute · 2355 words · 882 seconds
Argument 1
Structured mentorship, open applications, and leadership buy‑in ensure commitment and relevance of the program (Manohar Sreekanth)
EXPLANATION
Manohar stresses that the AI cohort’s design—open calls, careful screening, and securing leadership buy‑in—creates accountability and relevance for participating NGOs. Mentorship further supports NGOs throughout the four‑month pilot.
MAJOR DISCUSSION POINT
Program structure for commitment
AGREED WITH
Erica Arya, Temina Madon, Steven Suting
Argument 2
Mentors provide technical guidance, help navigate barriers, and reinforce responsible AI practices (Manohar Sreekanth)
EXPLANATION
Manohar notes that mentors were paired with each nonprofit to offer technical assistance, help overcome resource constraints, and ensure responsible AI considerations were integrated into solutions.
EVIDENCE
He references the mentorship model where each nonprofit was linked with one or two mentors, and mentions his own role as a mentor for Simple Education, highlighting the support provided [236-239].
MAJOR DISCUSSION POINT
Mentorship role in cohort
AGREED WITH
Erica Arya, Steven Suting
Steven Suting
4 arguments · 182 words per minute · 1399 words · 461 seconds
Argument 1
Quest Alliance builds AI‑driven, personalized learning pathways to address heterogeneous learner needs and teacher shortages (Steven Suting)
EXPLANATION
Steven describes how Quest Alliance uses AI to create individualized learning experiences that cater to diverse student backgrounds and compensate for limited teacher availability. The solution aims to move beyond the one‑size‑fits‑all “chalk‑and‑talk” model.
EVIDENCE
He explains that learners differ in cognition, confidence, and access, and that teacher-student ratios are often poor, prompting the need for AI-powered personalized pathways [165-174].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Quest Alliance’s AI-driven personalized learning solution is described in [S1] and the broader need for adaptive teaching is emphasized in [S19].
MAJOR DISCUSSION POINT
Personalized AI learning for education
DISAGREED WITH
Erica Arya, Mainak Roy
Argument 2
Designing AI for behavior change shifts the problem from pure software to a behavior‑science challenge, highlighting the need for responsible AI safeguards (Steven Suting)
EXPLANATION
Steven argues that moving from delivering information to influencing behavior requires understanding human signals, turning the project into a behavior‑science problem. Consequently, responsible AI safeguards become essential to ensure ethical interactions.
EVIDENCE
He notes that the focus shifted to emulating human teachers, detecting signals like tonality and confidence, and that this reframed the work as a behavior-science challenge requiring responsible AI considerations [258-266].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift to a behavior-science focus and the need for responsible AI safeguards are discussed in [S2] and the broader responsible AI context in [S15].
MAJOR DISCUSSION POINT
Behavior‑science focus in AI education
Argument 3
Cross‑NGO collaboration within the cohort prevents duplicated effort and enables shared model development (Steven Suting)
EXPLANATION
Steven highlights that the cohort’s collaborative environment allowed NGOs to learn from each other’s approaches, avoid reinventing solutions, and co‑develop models. This sharing accelerated problem‑solving across participants.
EVIDENCE
He describes how conversations within the cohort revealed similar challenges, leading to shared solutions and faster progress, citing examples of learning from peers and adapting approaches [350-359].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Cohort collaboration that enables peer learning and avoids duplicated effort is noted in [S1] and [S2].
MAJOR DISCUSSION POINT
Collaboration to avoid duplication
AGREED WITH
Erica Arya, Manohar Sreekanth
Argument 4
Conduct a thorough needs assessment and avoid a solution‑first mindset to ensure alignment with beneficiary outcomes (Steven Suting)
EXPLANATION
Steven stresses the importance of first understanding the specific needs of learners and contexts before designing AI solutions, to prevent misaligned or redundant interventions. This approach ensures that technology serves real educational outcomes.
EVIDENCE
He discusses the varied needs of different age groups and contexts, emphasizing that responsible AI must be tailored to each audience, and that assessing needs before building is crucial [360-366].
MAJOR DISCUSSION POINT
Needs‑first approach in AI design
AGREED WITH
Pritam Sukumar, Mainak Roy
Pritam Sukumar
3 arguments · 186 words per minute · 774 words · 249 seconds
Argument 1
Avanti Fellows generates AI‑summarized student‑report scripts that guide mentor‑teacher conversations and support low‑touch online learners (Pritam Sukumar)
EXPLANATION
Pritam explains that Avanti Fellows built an AI system that aggregates student performance data and produces concise scripts for teachers and online learners. These scripts help structure mentorship conversations and provide actionable feedback.
EVIDENCE
He details the workflow: the AI collects test scores, attendance, and topic strengths, then generates a script used by teachers to guide discussions and by online learners for mentorship guidance [185-200].
MAJOR DISCUSSION POINT
AI‑generated student report scripts
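The workflow described above, aggregating test scores, attendance, and topic strengths into a prompt that yields a short mentor-conversation script, could be sketched as follows. This is an illustrative assumption, not Avanti Fellows' actual code: the field names, prompt wording, and the stub generator are all hypothetical.

```python
# Hypothetical sketch of an AI-summarized student-report script pipeline.
# Field names, prompt text, and the stub generator are illustrative only.

def build_script_prompt(student: dict) -> str:
    """Turn aggregated performance data into an LLM prompt for a short
    mentor-conversation script."""
    strengths = ", ".join(student["strong_topics"]) or "none recorded"
    gaps = ", ".join(student["weak_topics"]) or "none recorded"
    return (
        f"Write a short, encouraging script for a teacher mentoring "
        f"{student['name']}.\n"
        f"- Latest test score: {student['score']}%\n"
        f"- Attendance: {student['attendance']}%\n"
        f"- Strong topics: {strengths}\n"
        f"- Topics needing support: {gaps}\n"
        f"Keep it under 150 words and end with one concrete next step."
    )

def generate_script(prompt: str) -> str:
    # Placeholder for the real LLM call (an API client in practice);
    # generated text would be reviewed before reaching teachers.
    return f"[draft script based on prompt of {len(prompt)} chars]"

student = {
    "name": "Asha",
    "score": 72,
    "attendance": 88,
    "strong_topics": ["fractions", "geometry"],
    "weak_topics": ["algebra"],
}
prompt = build_script_prompt(student)
print(generate_script(prompt))
```

The same prompt structure serves both audiences mentioned in the session: teachers use the generated script to guide discussions, and online learners receive it directly as mentorship guidance.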
Argument 2
Large language model hallucinations (e.g., incorrect trend statements) undermine trust and demand prompt‑engineering and “golden” data sets (Pritam Sukumar)
EXPLANATION
Pritam reports that the AI sometimes produces factual errors, such as stating a decreasing trend when data shows an increase, which erodes teacher confidence. To address this, they are refining prompts and creating a curated “golden” dataset for validation.
EVIDENCE
He notes that hallucinations appear as wrong trend statements despite correct numbers, leading to prompt-engineering efforts and the development of a “golden” data set to improve accuracy [209-214].
MAJOR DISCUSSION POINT
LLM hallucinations and mitigation
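The specific failure mode Pritam describes, correct numbers but a wrong stated direction, lends itself to an automated consistency check run against a curated "golden" data set. A minimal sketch, assuming the check compares the LLM's wording against a trend computed from the raw scores (all names and keyword lists here are illustrative, not the team's actual implementation):

```python
# Hypothetical golden-data-set check for trend-direction hallucinations:
# the numbers in the output are right, but "decrease" is stated where
# the data shows an increase. Keyword lists are illustrative assumptions.

def actual_trend(scores: list[float]) -> str:
    """Direction of change between first and last score."""
    if scores[-1] > scores[0]:
        return "increase"
    if scores[-1] < scores[0]:
        return "decrease"
    return "stable"

def trend_is_consistent(generated_text: str, scores: list[float]) -> bool:
    """Flag outputs whose stated trend contradicts the underlying data."""
    trend = actual_trend(scores)
    text = generated_text.lower()
    contradictions = {
        "increase": ["decreas", "declin", "drop"],
        "decrease": ["increas", "improv", "ris"],
        "stable": [],
    }
    return not any(word in text for word in contradictions[trend])

# Golden data set: curated (scores, expected-trend) pairs used to
# regression-test prompt changes before they reach teachers.
golden = [
    ([55, 62, 71], "increase"),
    ([80, 74, 68], "decrease"),
]
for scores, expected in golden:
    assert actual_trend(scores) == expected

print(trend_is_consistent("Asha's marks show a steady decline.", [55, 62, 71]))
```

Running prompt variants against such a golden set turns hallucination mitigation into a measurable regression test rather than ad hoc spot-checking.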
Argument 3
Begin with concrete pain points rather than AI hype; assess whether AI truly adds value (Pritam Sukumar)
EXPLANATION
Pritam advises NGOs to start by identifying operational pain points and then evaluate if AI can address them, rather than pursuing AI for its own sake. This pragmatic approach ensures resources are directed toward genuine impact.
EVIDENCE
He emphasizes shifting focus from AI hype to real challenges, noting that AI can aid daily operations like grant writing or coding, but only when aligned with actual needs [400-404].
MAJOR DISCUSSION POINT
Pain‑point driven AI adoption
AGREED WITH
Steven Suting, Mainak Roy
Mainak Roy
3 arguments · 170 words per minute · 729 words · 256 seconds
Argument 1
Simple Education’s WhatsApp chatbot assists teachers in selecting evidence‑based pedagogical strategies for each class (Mainak Roy)
EXPLANATION
Mainak describes a WhatsApp‑based chatbot that helps teachers quickly obtain evidence‑based teaching strategies tailored to specific classroom contexts. By using a familiar platform, the tool reduces the cognitive load on teachers handling multiple classes.
EVIDENCE
He outlines the problem of teachers needing the right pedagogical approach for each lesson and explains that the solution uses WhatsApp as a chatbot to deliver evidence-based recommendations [155-163].
MAJOR DISCUSSION POINT
WhatsApp chatbot for teacher support
Argument 2
Initial bot interaction (“hi” greeting) failed, requiring workaround and strict conversation guardrails (Mainak Roy)
EXPLANATION
Mainak recounts that the chatbot did not recognize the initial “hi” greeting, causing the data collection flow to break. The team introduced a workaround to treat any first message as a greeting and added guardrails to limit permissible conversations.
EVIDENCE
He explains the failure of the bot to process the initial greeting, the subsequent workaround to treat the first message as “hi”, and the implementation of conversation guardrails to prevent off-topic queries [241-246].
MAJOR DISCUSSION POINT
Chatbot greeting bug and guardrails
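The two fixes Mainak describes, treating any first message as a greeting and constraining permissible conversations, could be sketched as a simple message handler. This is a hedged illustration under stated assumptions: the keyword list, replies, and function shape are hypothetical, not Simple Education's actual bot logic.

```python
# Hypothetical sketch of the greeting workaround and conversation
# guardrails described for the WhatsApp teacher chatbot.
# ALLOWED_KEYWORDS and the reply strings are illustrative assumptions.

ALLOWED_KEYWORDS = ("lesson", "class", "strategy", "teach", "student")

def handle_message(message: str, is_first_message: bool) -> str:
    # Workaround: the original flow broke unless the teacher typed "hi",
    # so any first message now triggers the greeting flow.
    if is_first_message:
        return "Hi! Which class and topic are you teaching today?"
    # Guardrail: only answer questions within the pedagogy domain,
    # declining off-topic queries instead of improvising.
    if not any(word in message.lower() for word in ALLOWED_KEYWORDS):
        return "I can only help with classroom teaching strategies."
    return "Here is an evidence-based strategy for that lesson..."

print(handle_message("Good morning", is_first_message=True))
print(handle_message("What's the weather?", is_first_message=False))
print(handle_message("Strategy for a fractions lesson?", is_first_message=False))
```

The design choice mirrors the session's point: rather than training teachers to use the bot "correctly", the bot was made tolerant of real first messages, while guardrails keep the conversation inside its intended scope.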
Argument 3
Build solutions that are needed and scalable, not just “sexy” technologies; incorporate evaluation mechanisms early (Mainak Roy)
EXPLANATION
Mainak stresses that NGOs should prioritize building tools that address real, scalable needs rather than chasing trendy technologies. Early incorporation of evaluation frameworks, such as a “golden” data set, helps ensure solution relevance and quality.
EVIDENCE
He notes the importance of building what is needed, scaling to thousands of users, and mentions the development of a “golden” data set to align AI outputs with expectations [410-418].
MAJOR DISCUSSION POINT
Need‑driven, scalable AI development
AGREED WITH
Steven Suting, Pritam Sukumar
Elizabeth
1 argument · 163 words per minute · 305 words · 111 seconds
Argument 1
AI can be used for administrative facilitation such as introducing panelists, though practical limitations remain (Elizabeth)
EXPLANATION
Elizabeth attempts to introduce panel members using AI, demonstrating that AI can handle routine administrative tasks. However, technical issues like volume and audio clarity reveal current practical constraints.
EVIDENCE
She tries to speak louder after a request, attempts to introduce the panel, but the moderator notes difficulty hearing her, highlighting the limitations of AI-driven facilitation [5-7] and [10-13].
MAJOR DISCUSSION POINT
AI for admin tasks with limitations
Agreements
Agreement Points
Cohort model provides deep learning, peer sharing, and efficient resource pooling for NGOs
Speakers: Erica Arya, Temina Madon, Manohar Sreekanth, Steven Suting
Cohort model enables deep learning, peer sharing, and iterative platform improvement (Erica Arya)
Funding a cohort allows pooling of scarce technical resources (Temina Madon)
Structured mentorship, open applications, and leadership buy‑in ensure commitment and relevance of the program (Manohar Sreekanth)
Cross‑NGO collaboration within the cohort prevents duplicated effort and enables shared model development (Steven Suting)
All speakers highlighted that the cohort approach lets a small set of NGOs learn deeply from each other, share resources such as technical staff, and benefit from mentorship and collaborative problem-solving, leading to more effective AI projects [70-74][124-129][236-239][350-359].
POLICY CONTEXT (KNOWLEDGE BASE)
The value of cohort-style collaboration and pooled resources is reflected in NGO-focused AI initiatives that emphasize early-stage knowledge sharing and shared staffing, as highlighted in case studies of nonprofit AI collaborations [S38] and calls for open prototype sharing to avoid duplication [S40]. Pooled funding mechanisms such as the UN CERF also illustrate policy support for shared technical staff across organizations [S51].
Embedding responsible AI principles from the outset through partnerships and mentorship
Speakers: Erica Arya, Manohar Sreekanth, Steven Suting
Knowledge partners (Digital Future Labs, Statle) embed responsible AI principles and safety considerations from the outset (Erica Arya)
Mentors provide technical guidance, help navigate barriers, and reinforce responsible AI practices (Manohar Sreekanth)
Designing AI for behavior change shifts the problem to a behavior‑science challenge, highlighting the need for responsible AI safeguards (Steven Suting)
Erica described partnering with experts to integrate responsible AI, Manohar noted mentors ensuring ethical practices, and Steven emphasized that behavior-change AI requires safeguards, showing consensus on early responsible-AI integration [300-307][236-239][258-266].
POLICY CONTEXT (KNOWLEDGE BASE)
Embedding responsible AI early aligns with multi-stakeholder standards that stress early engagement of developers and scientists [S34] and with governance frameworks that require pre-deployment safety assessments and embedded accountability throughout the AI lifecycle [S42][S44]. International capacity-building guidelines also stress collaborative stewardship of AI ethics from the start [S36].
Prioritize existing open‑source solutions before building custom AI tools
Speakers: Erica Arya, Steven Suting, Pritam Sukumar
Prioritize existing open‑source platforms that meet most needs before investing in custom development (Erica Arya)
Cross‑NGO collaboration avoids reinventing solutions and encourages use of existing tools (Steven Suting)
Begin with concrete pain points rather than AI hype; assess whether AI truly adds value (Pritam Sukumar)
Erica urged NGOs to start with open-source platforms, Steven highlighted avoiding duplication by leveraging existing solutions, and Pritam stressed evaluating real needs before custom builds, reflecting a shared pragmatic stance [426-433][350-359][400-404].
POLICY CONTEXT (KNOWLEDGE BASE)
Advocacy for leveraging open-source and existing curricula to avoid duplication is documented in nonprofit AI collaborations that promote using proven open educational resources and shared prototypes [S39][S40]. This approach is also recommended as a means to accelerate learning while conserving limited resources [S38].
Adopt a needs‑first, pain‑point driven approach to AI adoption
Speakers: Steven Suting, Pritam Sukumar, Mainak Roy
Conduct a thorough needs assessment and avoid a solution‑first mindset to ensure alignment with beneficiary outcomes (Steven Suting)
Begin with concrete pain points rather than AI hype; assess whether AI truly adds value (Pritam Sukumar)
Build solutions that are needed and scalable, not just “sexy” technologies; incorporate evaluation mechanisms early (Mainak Roy)
All three emphasized starting from the problem rather than the technology, ensuring AI addresses real challenges and scales appropriately [360-366][400-404][410-418].
Collaboration across NGOs accelerates learning and avoids duplicated effort
Speakers: Steven Suting, Erica Arya, Manohar Sreekanth
Cross‑NGO collaboration within the cohort prevents duplicated effort and enables shared model development (Steven Suting)
Collaboration beyond the cohort helps NGOs share solutions and build together (Erica Arya)
Collaboration aspect of the program allowed NGOs to learn from each other rather than working in isolation (Manohar Sreekanth)
Steven, Erica, and Manohar all pointed out that the cohort’s collaborative environment fostered peer learning, shared solutions, and reduced redundancy [350-359][368-382][338-343].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple reports on nonprofit AI adoption emphasize that cross-NGO collaboration reduces reinventing solutions and speeds knowledge transfer, as seen in early-stage partnership case studies [S38] and explicit calls for open sharing of prototypes and lessons learned [S40].
Systematic evaluation frameworks are essential for impact measurement
Speakers: Temina Madon, Mainak Roy, Steven Suting
A four‑level evaluation framework (user, monitoring, impact, etc.) is rarely seen in the private sector but essential for social‑impact projects (Temina Madon)
The “golden” data set ensures AI outputs align with expectations and supports evaluation (Mainak Roy)
Early incorporation of evaluation mechanisms helps maintain solution quality (Steven Suting – implied through discussion of guardrails and testing)
Temina described a multi-stage evaluation model, Mainak introduced a “golden” data set for validation, and Steven emphasized testing and guardrails, indicating consensus on the need for robust evaluation [269-279][418-419][350-359].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy analyses of digital transformation in development contexts stress the need for robust impact-measurement frameworks to inform decisions and ensure accountability [S45]. Safeguarding initiatives further highlight the importance of independent, evidence-based evaluation of AI interventions [S46][S47].
Similar Viewpoints
Both stress that collaboration among NGOs within the cohort reduces duplication and leverages existing solutions, enhancing efficiency [368-382][350-359].
Speakers: Erica Arya, Steven Suting
Cross‑NGO collaboration avoids reinventing solutions and encourages use of existing tools (Erica Arya)
Cross‑NGO collaboration within the cohort prevents duplicated effort and enables shared model development (Steven Suting)
Both highlight that structured program design—through pooled resources and mentorship—strengthens NGO participation and outcomes [124-129][236-239].
Speakers: Temina Madon, Manohar Sreekanth
Funding a cohort allows pooling of scarce technical resources (Temina Madon)
Structured mentorship, open applications, and leadership buy‑in ensure commitment and relevance of the program (Manohar Sreekanth)
Both argue for a problem‑driven, responsible approach that prioritizes real needs and evaluation over flashy technology [258-266][410-418].
Speakers: Steven Suting, Mainak Roy
Designing AI for behavior change shifts the problem to a behavior‑science challenge, highlighting the need for responsible AI safeguards (Steven Suting)
Build solutions that are needed and scalable, not just “sexy” technologies; incorporate evaluation mechanisms early (Mainak Roy)
Unexpected Consensus
Both funders (Temina) and technology partners (Erica) see shared technical staffing as a core solution
Speakers: Temina Madon, Erica Arya
Funding a cohort allows pooling of scarce technical resources (Temina Madon)
Knowledge partners embed responsible AI principles and provide expertise (Erica Arya)
It is surprising that a funder and a technology organization independently converge on the idea of shared technical expertise (staff and knowledge partners) as a primary mechanism to overcome NGO capacity gaps, indicating a cross-sector alignment on resource pooling [124-129][300-307].
POLICY CONTEXT (KNOWLEDGE BASE)
The concept of shared technical staffing is mirrored in pooled funding mechanisms such as the UN Central Emergency Response Fund, which aim to provide joint technical resources across agencies [S51]. Coordination between funders and tech partners is also advocated as a pathway to coordinated AI deployment [S31].
Overall Assessment

The panel shows strong convergence on the value of cohort‑based programs, early integration of responsible AI, pragmatic use of open‑source tools, needs‑first design, collaborative learning, and rigorous evaluation. These shared positions suggest a mature, community‑driven approach to AI for development that balances innovation with ethical safeguards and impact measurement.

High consensus across speakers, indicating a cohesive understanding that collaborative, well‑structured, and ethically grounded cohort models are essential for effective AI deployment in the social sector.

Differences
Different Viewpoints
How to develop AI solutions for NGOs – open‑source first versus building custom tools
Speakers: Erica Arya, Mainak Roy, Steven Suting
Prioritize existing open‑source platforms that meet most needs before investing in custom development (Erica Arya)
Simple Education’s WhatsApp chatbot assists teachers … using a familiar platform (Mainak Roy)
Quest Alliance builds AI‑driven, personalized learning pathways to address heterogeneous learner needs and teacher shortages (Steven Suting)
Erica advises NGOs to first assess and adopt existing open-source tools that satisfy 60-70% of requirements before creating bespoke solutions [426-433]. In contrast, Mainak describes a custom WhatsApp-based chatbot built specifically for teachers, and Steven outlines a purpose-built AI platform for personalized learning, both representing custom development approaches [155-163][165-174]. All aim to deliver effective AI for education, but they differ on whether to start with existing open-source solutions or to build new tools from scratch.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between open-source adoption and bespoke development is reflected in sector discussions that champion open educational resources and shared prototypes to avoid duplication [S39][S40], while also recognizing occasional needs for custom solutions in specialized contexts [S38].
Methods for embedding responsible AI and safety safeguards
Speakers: Erica Arya, Steven Suting, Manohar Sreekanth
Knowledge partners (Digital Future Labs, Statle) embed responsible AI principles and safety considerations from the outset (Erica Arya)
Designing AI for behavior change shifts the problem … highlighting the need for responsible AI safeguards (Steven Suting)
Did any one of you use any of those things? … guardrail plugins … (Manohar Sreekanth)
Erica describes a formal partnership with Digital Future Labs and Statle to integrate responsible AI and AI safety into project designs from the beginning [300-307]. Steven emphasizes that moving to behavior-change objectives turns the work into a behavior-science challenge that requires responsible AI guardrails, but he does not reference formal partners, focusing instead on internal design considerations [258-266][360-366]. Manohar asks whether participants have used specific guard-rail plugins, indicating uncertainty about actual implementation [324-329]. The disagreement lies in the preferred mechanism: structured external partnerships versus internal design focus and uncertain adoption of concrete tools.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on responsible AI methods echo the broader policy discourse on early-stage engagement of developers [S34] and the need for context-specific governance frameworks that integrate safety assessments and proactive safeguards throughout the AI lifecycle [S42][S44].
Resource allocation – cohort funding with pooled technical staff versus individual NGO funding
Speakers: Temina Madon, Erica Arya
Funding a cohort allows pooling of scarce technical resources (e.g., part‑time AI engineers, product managers) to serve multiple NGOs efficiently (Temina Madon)
The AI cohort was anchored by Project Tech for Dev … mentors were paired with each nonprofit … (Erica Arya)
Temina argues that the Agency Fund’s cohort model creates a shared pool of ten technical staff and product managers to support several NGOs, addressing the chronic skill gap [124-129]. Erica describes a mentorship model where each nonprofit receives one or two mentors, but does not mention a pooled technical staff pool, implying a more individualized mentor assignment rather than a shared resource pool [310-312]. The tension is between a centralized pooled staffing approach versus a mentor-centric model without explicit shared technical resources.
POLICY CONTEXT (KNOWLEDGE BASE)
The trade-off between pooled cohort funding and isolated NGO budgets is illustrated by analyses of pooled funding instruments like the CERF, which promote shared technical capacity across organizations [S51], contrasted with discussions on coordinated policy action that require clear allocation mechanisms [S31].
Unexpected Differences
AI’s capability to handle administrative tasks versus practical limitations
Speakers: Elizabeth, Manohar Sreekanth
AI can be used for administrative facilitation such as introducing panelists, though practical limitations remain (Elizabeth)
Manohar repeatedly experiences audio issues and cannot hear Elizabeth, highlighting technical constraints (Manohar Sreekanth)
Elizabeth attempts to introduce panel members using AI, suggesting AI can manage routine admin duties [5-7]. However, Manohar repeatedly notes that he cannot hear her and experiences technical glitches, exposing the current limitations of AI-driven facilitation [10-13][14-17]. The contrast between the claimed capability and the observed failure was not anticipated given the panel’s focus on AI for development projects.
POLICY CONTEXT (KNOWLEDGE BASE)
Observations that AI adoption stalls due to data readiness and governance gaps, and that practical limitations curb expectations for fully automated administrative workflows, are documented in recent assessments of agentic AI deployment challenges [S49].
Overall Assessment

The discussion reveals consensus on the value of cohort‑based mentorship and the need for a needs‑first, responsible AI approach. Disagreements centre on the preferred development pathway (open‑source reuse vs custom builds), the mechanism for embedding responsible AI (formal partner integration vs internal design and uncertain guard‑rail adoption), and the resource model (pooled technical staff vs mentor‑centric assignments). These divergences reflect differing operational philosophies rather than fundamental opposition, suggesting that the panel can converge on hybrid models that combine shared resources, open‑source foundations, and tailored safeguards.

Moderate – while participants share overarching goals (effective, ethical AI for NGOs), they propose distinct methods. The moderate level of disagreement indicates room for collaborative synthesis, but also highlights the need for clear guidelines on resource allocation, safety integration, and development strategies to avoid fragmented efforts.

Partial Agreements
All three agree that a cohort structure, combined with mentorship and leadership buy‑in, is essential for effective capacity building and learning among NGOs. However, Erica focuses on peer learning and platform iteration, Temina on resource pooling, and Manohar on program design and mentorship logistics, showing different emphases on how the cohort achieves its goals [70-74][124-129][236-239].
Speakers: Erica Arya, Temina Madon, Manohar Sreekanth
Cohort model enables deep learning, peer sharing, and iterative platform improvement (Erica Arya)
Funding a cohort allows pooling of scarce technical resources … (Temina Madon)
Structured mentorship, open applications, and leadership buy‑in ensure commitment and relevance of the program (Manohar Sreekanth)
All three stress the importance of a needs‑first approach before building AI solutions. Pritam emphasizes starting from pain points, Steven stresses needs assessment to avoid solution‑first, and Erica recommends evaluating open‑source options first. Their agreement is on the goal of relevance, but they differ on the specific step (pain‑point analysis, broader needs assessment, or open‑source evaluation) [400-404][360-366][426-433].
Speakers: Pritam Sukumar, Steven Suting, Erica Arya
Begin with concrete pain points rather than AI hype; assess whether AI truly adds value (Pritam Sukumar)
Conduct a thorough needs assessment and avoid a solution‑first mindset to ensure alignment with beneficiary outcomes (Steven Suting)
Prioritize existing open‑source platforms that meet most needs before investing in custom development (Erica Arya)
Takeaways
Key takeaways
A cohort‑based AI program enables deep learning, peer sharing, and iterative improvement of open‑source platforms for NGOs.
Pooling scarce technical resources (part‑time AI engineers, product managers) through a funder‑run cohort is more efficient than individual NGO hiring.
Structured mentorship, open applications, and leadership buy‑in are essential for commitment and relevance.
Three education‑focused AI pilots were highlighted: a WhatsApp chatbot for teacher pedagogy (Simple Education), personalized learning pathways for students (Quest Alliance), and AI‑generated student‑report scripts for mentor‑teacher conversations (Avanti Fellows).
Implementation challenges included bot interaction failures, large‑language‑model hallucinations, and the need for prompt engineering and “golden” data sets to build trust.
Designing AI for behavior change shifts the problem from pure software to a behavior‑science challenge, underscoring the importance of responsible AI safeguards.
Embedding responsible AI and AI‑safety principles from the outset (via knowledge partners Digital Future Labs and Statle) proved valuable.
Cross‑NGO collaboration within the cohort prevented duplicated effort and facilitated shared model development.
A four‑level evaluation framework (user, monitoring, impact, etc.) is critical for social‑impact projects but rarely used in the private sector.
Practical recommendations for NGOs: start from concrete pain points, prioritize existing open‑source solutions, build only what is needed and scalable, and integrate evaluation mechanisms early.
Resolutions and action items
Panelists agreed to continue sharing documentation, blogs, and the evaluation framework with interested NGOs.
Simple Education will implement a workaround for the bot’s initial greeting and refine conversation guardrails.
Avanti Fellows will continue prompt engineering and develop a “golden” data set to reduce LLM hallucinations before full rollout.
Quest Alliance will further explore behavior‑science modeling and integrate responsible‑AI guardrails identified by knowledge partners.
Project Tech4Dev will maintain the mentorship pool (technical staff and product managers) for future cohorts.
All NGOs were encouraged to assess existing open‑source platforms before embarking on custom builds.
Unresolved issues
How to systematically eliminate or mitigate LLM hallucinations in production‑grade tools remains an open technical problem.
The optimal balance between building custom AI solutions versus adopting/buying existing models (e.g., Gemini, Claude) was not decided.
Long‑term sustainability and scaling of the cohort model beyond the pilot year were not fully addressed.
Specific metrics for measuring behavior‑change impact in Quest Alliance’s solution are still under development.
Details on integrating AI‑safety plugins (e.g., slur‑list, guardrail tools) into each NGO’s product pipeline were discussed but not finalized.
Suggested compromises
Instead of requiring NGOs to hire full‑time AI engineers, the funder provides a shared pool of part‑time engineers and product managers.
The chatbot’s strict “hi” requirement was relaxed to accept any first message as a greeting, allowing the flow to continue.
NGOs were advised to adopt open‑source components and only add custom layers where gaps exist, reducing development effort while still meeting specific needs.
Mentors and knowledge partners offered guidance on responsible AI early, balancing rapid prototyping with safety considerations.
Thought Provoking Comments
Our platforms are not top‑down; they are always listening from the ground to what nonprofits need. Cohort‑based programs let us learn deeply from a small set of NGOs, feeding those learnings back into the platforms we build.
Highlights a user‑centric, iterative design philosophy that contrasts with typical top‑down tech deployments, emphasizing the value of close field engagement.
Set the foundation for discussing why the cohort model was chosen, prompting Temina and others to elaborate on the benefits of peer learning and co‑creation, and steering the conversation toward the importance of grounding AI solutions in real‑world NGO contexts.
Speaker: Erica Arya
In Silicon Valley we start ventures in cohorts (e.g., Y Combinator). We brought that philosophy to the Agency Fund—creating a pool of shared technical staff and product managers for NGOs that can’t afford full‑time AI engineers.
Introduces the accelerator/cohort concept from the startup world into the nonprofit sector, proposing resource pooling as a scalable solution to talent scarcity.
Shifted the discussion from abstract program goals to concrete operational mechanisms, leading to deeper dialogue about how shared resources lower barriers and how this model differs from traditional grant‑making.
Speaker: Temina Madon
We built a WhatsApp chatbot for teachers, but the bot would glitch if the teacher didn’t start with a simple ‘hi’. We had to redesign the flow to treat any first message as a greeting and add guardrails on permissible questions.
Exposes a real‑world usability challenge that reveals the gap between AI design assumptions and on‑the‑ground user behavior, emphasizing the need for flexible conversational design.
Prompted other panelists to discuss similar implementation hurdles, leading to a broader conversation about user onboarding, error handling, and the importance of iterative testing in low‑resource settings.
Speaker: Mainak Roy
We’re seeing hallucinations where numbers are correct but the direction (increase vs. decrease) is wrong. This forces us into extensive prompt engineering and raises trust issues for teachers.
Draws attention to a core technical limitation of LLMs—hallucinations—and its practical implications for decision‑making in education, highlighting the need for rigorous validation.
Steered the dialogue toward AI reliability and safety, prompting Steven and Erica to discuss responsible AI practices and the role of knowledge partners in mitigating such risks.
Speaker: Pritam Sukumar
When we tried to solve the problem we realized it wasn’t just a software issue; it became a behavior‑science problem. We need to emulate human signals (tone, confidence) to make the bot truly helpful.
Reframes the technical challenge as an interdisciplinary one, stressing that effective AI in education must incorporate behavioral insights, not just algorithms.
Created a turning point where the conversation moved from technical implementation to the broader design philosophy, influencing others to consider human‑centric metrics and prompting discussion of responsible AI integration.
Speaker: Steven Suting
We partnered with Digital Future Labs and Statle to embed responsible AI and AI‑safety principles from day one, rather than as an afterthought.
Emphasizes proactive ethical design, contrasting with the common practice of retrofitting safety, and showcases a concrete model for integrating ethics into nonprofit AI projects.
Deepened the conversation on governance, leading to reflections on guardrails, slur lists, and how such safeguards were actually applied in the Simple Education chatbot, reinforcing the theme of responsible AI.
Speaker: Erica Arya
Most private‑sector products never reach the fourth level of evaluation—impact evaluation at scale. In the social sector we aim for that, using a four‑level framework (user, monitoring, impact, etc.).
Introduces a comprehensive evaluation framework rarely seen in for‑profit tech, highlighting the sector’s commitment to measurable social outcomes.
Prompted participants to discuss how they assess effectiveness, leading to mentions of the “golden data set” and reinforcing the importance of rigorous impact measurement throughout the program.
Speaker: Tamina Madon
Don’t jump on building custom solutions if an open‑source platform meets 60‑70 % of your needs. Start there, then iterate or add components as required.
Offers a pragmatic, cost‑effective strategy that challenges the impulse to over‑engineer, encouraging reuse and incremental development.
Served as a concluding actionable takeaway, resonating with earlier points about resource constraints and influencing the audience’s perception of sustainable AI adoption.
Speaker: Erica Arya (final remarks)
Overall Assessment

The discussion was shaped by a series of pivotal insights that moved the conversation from high‑level program description to concrete, on‑the‑ground realities of building AI for NGOs. Erica’s emphasis on ground‑up learning introduced the need for user‑centric design, which Tamina reinforced with the accelerator/cohort model and shared resource pools. Mainak’s anecdote about the WhatsApp bot and Pritam’s experience with LLM hallucinations highlighted practical implementation challenges, prompting a shift toward deeper considerations of reliability and ethics. Steven’s reframing of the problem as a behavior‑science issue and Erica’s early integration of responsible AI practices broadened the scope to interdisciplinary design and governance. Tamina’s evaluation framework and Erica’s final advice on leveraging open‑source solutions provided concrete, actionable guidance. Collectively, these comments redirected the dialogue toward collaborative, ethical, and impact‑focused AI development, underscoring the unique collaborative ethos of the nonprofit sector compared to the for‑profit world.

Follow-up Questions
Did any of the NGOs use guardrail plugins (e.g., slur list, Lama plugins) for AI safety?
Manohar asked this directly but no panelist answered, leaving uncertainty about whether safety mechanisms were actually deployed.
Speaker: Manohar Sreekanth
Did you evaluate Gemini or Claude models? What was the build‑vs‑buy decision?
Manohar raised this at the end of the session; the panel did not provide an answer, so the choice of large language model remains unclear.
Speaker: Manohar Sreekanth
How can hallucinations in LLM‑generated teacher scripts be mitigated?
Pritam highlighted hallucinations as a major negative feedback; further research is needed on prompt engineering or validation techniques to improve trust.
Speaker: Pritam Sukumar
What are effective methods for creating a “golden set” to improve AI response alignment?
Mainak mentioned a “golden set” as part of their evaluation; the process for building and maintaining such a dataset requires deeper investigation.
Speaker: Mainak Roy
How can responsible AI principles be tailored for different user demographics (e.g., 25‑year‑old vs. grade‑8 student)?
Steven noted that responsibility means different things for different audiences; guidelines are needed to operationalize this nuance.
Speaker: Steven Suting
What evaluation frameworks are most suitable for measuring impact of AI solutions in the social sector?
Tamina described a four‑level framework (user, monitoring, impact, etc.) and suggested it is rarely used in the private sector; its applicability and validation merit further study.
Speaker: Tamina Madon
How can NGOs efficiently share resources (e.g., technical staff, product managers) across projects to reduce costs?
Tamina described a shared pool of technical staff; research is needed on models for resource pooling and cost‑effectiveness.
Speaker: Tamina Madon
What mechanisms best facilitate collaboration among NGOs to avoid duplicate effort?
Erica gave examples of NGOs building similar predictive models and collaborating; systematic approaches to foster such collaboration are an open area.
Speaker: Erica Arya
What are the barriers to scaling AI solutions to thousands of teachers/students in India, and how can they be addressed?
Mainak emphasized scale as a key metric and noted challenges; identifying and mitigating scaling barriers requires further investigation.
Speaker: Mainak Roy
How can AI be integrated into existing workflows (e.g., WhatsApp) while maintaining usability and data privacy?
Mainak’s solution leveraged WhatsApp for ease of adoption; research is needed on best practices for embedding AI in ubiquitous platforms securely.
Speaker: Mainak Roy
What are best practices for securing leadership buy‑in in NGOs for AI projects?
Erica stressed leadership buy‑in as essential for program success; systematic strategies to obtain and sustain it are a research gap.
Speaker: Erica Arya
What are the cost implications of providing part‑time AI engineers versus full‑time for NGOs?
Tamina suggested a pool of part‑time technical staff; a cost‑benefit analysis would clarify the optimal staffing model for NGOs.
Speaker: Tamina Madon
How can AI safety and responsible AI be operationalized from the start in nonprofit projects?
Erica highlighted embedding responsible AI via knowledge partners; detailed frameworks and toolkits for early integration are needed.
Speaker: Erica Arya
What are the most common AI use cases in NGOs beyond education (e.g., fundraising, grant writing) and their outcomes?
Pritam mentioned AI for fundraising and code generation; a broader mapping of high‑impact use cases across sectors would inform future programs.
Speaker: Pritam Sukumar
How can prompt engineering be systematized to reduce errors in AI‑generated content for education?
Pritam’s experience with hallucinations points to a need for systematic prompt‑engineering methodologies tailored to educational contexts.
Speaker: Pritam Sukumar

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

India's Roadmap to an AGI-Enabled Future


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel convened to examine “India’s Path to an AGI-Enabling Ecosystem,” emphasizing that building sovereign frontier AI models requires coordinated advances in energy, compute and research rather than importing existing solutions [2-7]. Ghanshyam Prasad outlined that India’s data-center load is projected to reach about 16 GW, with new facilities of 10 MW to 1,000 MW demanding “N + 1 + 1” reliability and variable-load management, which strains the current grid [30-35]. He noted that renewable capacity has risen from 2 GW to over 250 GW, with more than 40 GW added so far this year, and that future stability will rely on hydro-pump storage (target ≈ 100 GW in ten years) and a phased nuclear programme aiming for 100 GW by 2047 [56-61][76-78]. To cope with the growing demand, the grid planning cycle has been accelerated to six-month transmission updates and annual resource-adequacy reviews, allowing rapid correction of forecasting errors [221-229][230-235].


Tarun Dua estimated that serving the top 1,000 organisations would require at least 128,000 GPUs in India today, and that the country could eventually provide compute for the global market once the infrastructure scales [194-204][211-217]. He also advocated for locating data centres near small modular reactor (SMR) sites to obtain reliable, low-cost power, though he cautioned that SMR deployment may take longer than three to five years and will need containment zones away from dense populations [238-246]. Professor Jayadeva highlighted a shortage of motivated researchers, pointing out that most Indian graduates still prefer PhDs abroad because domestic career pathways and funding are unclear, and that stronger industry-university collaborations and clearer incentives are needed to retain talent [108-113][115-118][254-260]. He further argued that goal-directed, industry-funded projects often remain at arm’s length, and that a mix of short-, medium- and long-term research, supported by programmes such as the VLSI Design Tools and Technology Program, can foster innovation and translation to products [307-313][324-330].


Parth Sarthi explained that scaling laws show a predictable performance gain for every ten-fold increase in compute, turning AI progress into an engineering problem that can be addressed by expanding GPU resources and building domain-specific reinforcement-learning environments using India’s large pool of experts [131-140][144-148][155-162]. He emphasized that India’s multilingual, voice-first population enables unique RL environments for sectors like agriculture and law, and that the India mission’s GPU expansion (now over 50,000 GPUs) provides the compute backbone for such indigenous models [176-185][188-189]. In the audience Q&A, concerns were raised about the lack of IoT and smart-meter infrastructure for end-to-end data collection; Prasad responded that while generation-to-transmission is robust, distribution-level automation and smart-meter rollout (3 crore installed, 25 crore planned) are being accelerated to generate the data needed for AI and secure it within India [367-374][375-383][400-403].


The discussion concluded that, although significant challenges remain in power reliability, compute capacity, talent retention and data sovereignty, coordinated government programmes, industry initiatives and the country’s demographic advantages position India to build a self-sufficient AGI ecosystem [78-79][190-193][496-498]. Overall, the panel agreed that a holistic approach linking energy, compute and research is essential for India to achieve sovereign AI leadership and that ongoing policy and investment efforts are aimed at closing the existing gaps [2-7][496-498].


Keypoints

Major discussion points


Energy infrastructure must be expanded and hardened to support AI-driven data centres.


The panel highlighted the rapid growth of data-centre capacity (from 10 MW to gigawatt-scale) and the need to manage highly variable, high-reliability loads (N + 1 + 1 redundancy) [30-34][35]. Very large facilities (≥1 GW) will require 1.7-2× oversized supply [43-48], while India’s renewable rollout (≈250 GW and growing) is intended to meet green-power demands [56-61]. Grid planners are moving to dynamic, six-monthly transmission upgrades and annual resource-adequacy reviews to keep pace with the projected 16 GW data-centre load [221-230].


Compute capacity and GPU scaling are critical for a sovereign AI ecosystem.


Parth Sarthi explained that scaling laws make AI an engineering problem: every 10× increase in compute yields predictable performance gains, turning the race into a “spending competition” for GPUs and electricity [131-148]. New “reasoning” models shift compute from pre-training to RL-based inference, which can run on distributed, lower-cost hardware [155-162]. Tarun Dua projected that India will need at least 128,000 GPUs to serve its top 1,000 organisations, and that the country could eventually process 40-50% of global data, far beyond the current 3% share [194-208][211-218].


A robust talent pipeline and industry-academia collaboration are essential.


Professor Jayadeva stressed the shortage of motivated researchers, the lack of clear career pathways for PhDs, and the need for incentives (e.g., industry-funded fellowships) to retain talent [108-118][248-267]. He argued that both goal-directed and curiosity-driven research are needed, and cited successful university-industry programs (e.g., VLSI Design Tools and Technology Programme) that have produced patents and attracted top students [307-317][478-485].


Building the physical data-collection layer (IoT, smart meters, SCADA) is a missing piece for AI data sovereignty.


An audience member pointed out the lack of end-to-end sensor networks and digital twins in the power sector [351-358]. Ghanshyam Prasad responded that while generation-to-transmission infrastructure is mature, distribution-level automation is lagging; the RDSS programme and rollout of 30 million smart meters aim to create the data backbone and indigenous SCADA systems [367-380][400-409]. He also noted that the forthcoming India Energy Stack will address these gaps [425-433].


Overall purpose / goal


The session was convened to map “India’s Path to an AGI-Enabling Ecosystem,” identifying how the three pillars of energy, compute, and research talent must be coordinated under the NDIA mission to build sovereign frontier models that can solve India-scale problems [2][7][14][19].


Overall tone


The discussion began with a formal, optimistic framing of India’s strategic ambition. As speakers delved into technical details, the tone shifted to a candid acknowledgment of substantial challenges (grid reliability, compute shortfalls, talent drain). Throughout, a collaborative and solution-oriented spirit persisted, ending on a hopeful note that coordinated policy, industry, and academic effort will overcome the gaps [126-130][190-193][426-433].


Speakers


Parth Sarthi – Co-founder of Chariot; former Stanford PhD student and professor; previously worked on the Google Gemini DeepMind team (DeepThink project); inventor of the Raptor retrieval-augmented generation technique [S2].


Shri Ghanshyam Prasad – Chairperson, Central Electricity Authority (CEA); veteran of the Indian power sector with ≈ 35 years experience; former member of the G20 Energy Transition Working Group and first Executive Director of the BIMSEC Energy Centre [S3].


Tarun Dua – Founder and Managing Director, E2E Networks; leads enterprise-grade cloud and GPU infrastructure (H100, H200, B200) for AI enablement [S6].


Professor Jayadeva – GSV Chair Professor and former Head of Department, Electrical Engineering, IIT Delhi; alumnus of IIT Delhi; known for early SVM-based AD-converter on-chip and Minimal-Complexity-Machine research [S10].


Suvrat Bhoosha – Co-founder, Chariot; moderator and host of the “India’s Path to an AGI-Enabling Ecosystem” panel [S11].


Audience – Members of the live audience who asked questions (e.g., Pradeep Subramaniam). No specific institutional role provided [S13].


Additional speakers:


None (all participants are covered by the speakers list above).


Full session report: Comprehensive analysis and detailed insights

The session opened with Suvrat Bhoosha framing the summit’s ambition: to chart “India’s Path to an AGI-Enabling Ecosystem” and to stress that true frontier intelligence must be built on home-grown power, hardware and talent rather than on imported models or borrowed infrastructure [2-7][14-19]. He introduced the three pillars of the discussion – energy, compute and research – and announced the distinguished panel comprising the Chairperson of the Central Electricity Authority, the founder of E2E Networks, the GSV Chair Professor from IIT Delhi and his own co-founder at Chariot [15-18].


Energy pillar – Shri Ghanshyam Prasad warned that AI-driven workloads will transform data-centre demand from today’s 10-50 MW facilities to gigawatt-scale sites in Mumbai, Vizag, Chennai and elsewhere, with a total projected load of roughly 16 GW across India [30-31][33-35]. He described a four-layer “N + 1 + 1” reliability scheme: two independent supply sources, each backed by two diesel-generator backup layers, a regime unprecedented for a rapidly expanding economy [35][140-148]. Citing projects such as a 50 MW Adani-Google centre in Noida and a forthcoming ≥1 GW hub that will need 1.7-2× oversizing of its power contract [43-48], he highlighted India’s renewable surge from ≈2 GW in 2010-11 to >250 GW today (≈50% of total generation), with 30 GW added in the previous year and ≈40 GW added so far in the current year (April-January) [56-61][250-258].
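The oversizing thumb rule above translates into a simple sizing calculation. The sketch below assumes a plain multiplicative rule with the 1.7× ("near thumb rule") to 2× factors quoted in the session; the facility loads are the two examples mentioned (the Noida centre and the planned ~1 GW hub):

```python
# Power-contract sizing for a data centre, using the 1.7x "near thumb rule"
# (up to 2x) quoted in the session. The multiplicative rule itself is an
# assumption for illustration; only the factors and loads come from the talk.

def required_supply_mw(it_load_mw: float, oversize_factor: float = 1.7) -> float:
    """Contracted grid supply needed for a given data-centre load."""
    return it_load_mw * oversize_factor

for load in (50, 1000):  # the 50 MW Noida centre and the planned ~1 GW hub
    lo = required_supply_mw(load, 1.7)
    hi = required_supply_mw(load, 2.0)
    print(f"{load} MW facility -> {lo:.0f}-{hi:.0f} MW contracted supply")
```

On these numbers, a thousand-megawatt facility would need a 1,700-2,000 MW supply contract, which is why the summary describes such hubs as straining DISCOM planning.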


Prasad also outlined longer-term energy-security measures: a target of ~100 GW of pumped-hydro storage within ten years [115-122], a roadmap for ~100 GW of nuclear capacity by 2047 (≈22 GW by 2032-34) [124-130], and the ability to construct new high-voltage transmission lines in 24-36 months versus 5-10 years in the US/Europe [140-148]. He emphasized the rapid planning cadence – transmission upgrades refreshed every six months and resource-adequacy studies updated annually – enabling swift correction of forecasting errors [221-235].


Internationally, India’s grid is already interconnected with Nepal, Bhutan, Bangladesh and Myanmar and is planning links to the UAE, Saudi Arabia, Singapore and Sri Lanka [70-78]. To diversify data-centre siting, the country is expanding landing points beyond the existing Mumbai and Chennai hubs to include Vizag, Paradeep and Gopalpur [84-92]. Prasad stressed the need for “green-only” power for data centres, advocating placement close to renewable generation zones (e.g., Rajasthan, Gujarat) and dedicated transmission lines to guarantee clean supply [250-258].


On the distribution side, the RDSS programme is funding reliability upgrades and the rollout of smart meters: 3 crore installed to date with 25 crore planned over the next two-to-three years [373-380]. This will generate granular consumption data for indigenous SCADA systems, keeping data within Indian data-centres and supporting AI-driven grid optimisation [400-409]. When asked about IoT and end-to-end digital twins, Prasad referenced the “India Energy Stack” committee, which is addressing those use-cases [440-445].


Compute pillar – Parth Sarthi presented the compute pillar, invoking the 2020 OpenAI scaling laws: a ten-fold increase in compute yields a predictable reduction in model loss (≈0.07 exponent), turning AI progress into a pure engineering problem solvable by adding GPUs and electricity [131-140][144-148]. He noted that reasoning-oriented models shift much of the heavy lifting from pre-training to reinforcement-learning inference, which can run on distributed, lower-cost hardware [155-162]. The India mission has already provisioned 38,000 GPUs, now exceeding 50,000, forming the backbone for sovereign frontier models [176-185][402-405].
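The scaling-law arithmetic invoked here can be made concrete. A minimal sketch, assuming a pure power law L(C) = a · C^(−α) with the ≈0.07 exponent cited in the session; the starting loss and compute multipliers are hypothetical, chosen only for illustration:

```python
# Illustrative compute scaling law, L(C) = a * C**(-alpha), using the
# ~0.07 exponent cited in the session. The base loss of 3.0 and the
# compute multipliers below are hypothetical illustration values.

def scaled_loss(base_loss: float, compute_multiplier: float, alpha: float = 0.07) -> float:
    """Predicted loss after scaling compute by `compute_multiplier`."""
    return base_loss * compute_multiplier ** (-alpha)

base = 3.0                      # hypothetical starting loss
for mult in (10, 100, 1000):    # each 10x step gives the same relative gain
    print(f"{mult:>5}x compute -> predicted loss {scaled_loss(base, mult):.3f}")

# Each 10x in compute multiplies loss by 10**(-0.07) ~= 0.85, i.e. a
# predictable ~15% reduction per decade of compute -- the "engineering
# problem" framing in the talk.
```

The point of the power-law form is that progress becomes forecastable: the same relative improvement recurs for every ten-fold compute step, which is why the panel framed the race as a spending competition for GPUs and electricity.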


Parth also highlighted the AI-Kosh initiative – a mission-driven effort to curate large, indigenous datasets for training frontier models [380-386]. He argued that continued GPU scaling, combined with algorithmic efficiency gains from RL-based reasoning, will meet near-term compute needs.


Research & talent pillar – Professor Jayadeva identified a critical shortage of researchers willing to stay in India. He pointed out that women face additional parental pressure to settle early, reducing female participation in engineering research [102-108]. He also noted the growing popularity of a 2-year “MS-Research” degree as a more attractive alternative to a 5-year PhD [115-122]. Jayadeva called for a “critical mass of very smart researchers” and for universities to embed industry employees in research labs, thereby reducing exploratory costs and improving IPR sharing [110-113][254-260]. He warned against over-reliance on purely goal-directed, arm’s-length industry projects and advocated a balanced portfolio of short-, medium- and long-term research, citing the VLSI Design Tools and Technology Programme (since 1996) as a successful model that has produced patents and retained top talent [307-317][324-330][478-485]. Both curiosity-driven science and application-driven engineering, he argued, are essential for building efficient indigenous AI models [308-313].


Panel Q&A and points of contention


Power-source strategy: Tarun Dua championed co-locating Small Modular Reactor (SMR) nuclear plants on data-centre campuses, citing modular 220 MW blocks that could be added incrementally, eliminate transmission fees and remove on-site diesel generators [238-246]. Prasad cautioned that SMR deployment may extend beyond the three-to-five-year horizon, that containment zones (1-5 km radius) force siting away from dense populations, and that the immediate priority remains expanding renewable hubs and diversified landing points [239-246][62-68].


Research governance: Dua urged industry to define clear, time-bound roadmaps for “goal-directed” research and to fund them aggressively, warning that without such direction India would fall behind other nations [287-300]. Jayadeva counter-argued that while industry-driven projects are valuable, they must be complemented by basic, curiosity-driven research and more fluid IPR arrangements; otherwise the ecosystem risks stifling long-term innovation [308-317].


GPU shortfall: Dua projected a national minimum of 128,000 GPUs (128 per organisation across the top 1,000) [194-204][208-209][211-218], while Parth highlighted the rapid mission-driven expansion to over 50,000 GPUs and expressed confidence that continued scaling, together with RL-based models, would meet near-term demand [131-148][402-405].
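Dua's back-of-envelope figure follows directly from the stated per-organisation assumption. A trivial reproduction, where the 128-GPUs-per-organisation figure is the session's own assumption rather than a derived value:

```python
# Back-of-envelope reproduction of the GPU estimate attributed to Tarun Dua:
# 128 GPUs per organisation for the top 1,000 organisations. The per-org
# figure is the session's stated assumption, not a derived value.

TOP_ORGS = 1_000
GPUS_PER_ORG = 128

national_minimum = TOP_ORGS * GPUS_PER_ORG
print(f"Minimum GPUs for top {TOP_ORGS:,} organisations: {national_minimum:,}")
# -> Minimum GPUs for top 1,000 organisations: 128,000
```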


Indigenous hardware: Dua noted that most micro-processor IP is already designed in India but commercialisation stalls due to insufficient domestic volume; scaling demand will unlock full indigenisation [447-452]. Jayadeva added that India excels in VLSI design, yet semiconductor fabs (e.g., Mohali) are still catching up, and that design costs dominate chip development [478-485].


Data sovereignty and domain-specific datasets: Audience members asked how India could generate massive, secure datasets (e.g., agricultural loan assessments in Tamil, legal-aid reasoning in Hindi) and keep them within national borders. Parth reiterated that AI-Kosh is curating such indigenous datasets and that the expanding GPU pool will enable local processing [380-386][402-405].


IoT deployment: The audience pressed for government support to accelerate IoT in low-labour sectors (e.g., vending-machine networks), arguing that without sensor connectivity AI would merely replace cheap labour rather than create new jobs [413-424]. Prasad reiterated that the forthcoming India Energy Stack and ongoing smart-meter rollout aim to address these gaps, while acknowledging that full end-to-end digital twins remain a work-in-progress [425-433][440-445].


In closing, Suvrat Bhoosha summarised the consensus: India must own the full AI stack, from power generation and compute hardware to data and talent, to achieve sovereign frontier models capable of solving “India-scale” problems [190-193][496-498]. He thanked the panel for highlighting the intertwined challenges of energy reliability, GPU scaling, talent retention and data-layer development, and noted that coordinated policy, industry investment and academic reform are essential to bridge the remaining gaps [78-79][190-193][496-498]. The session therefore set a roadmap linking accelerated renewable and nuclear power planning, rapid smart-meter and SCADA deployment, aggressive GPU procurement, and a mixed research portfolio to realise India’s AGI-enabling ecosystem.


Session transcript: Complete transcript of the session
Suvrat Bhoosha

Researchers, founders and policy makers. At Chariot, we are proud to be one of the companies mandated to build frontier models for India under the NDIA mission to build sovereign frontier models for the country. But as we embark on this journey, we must recognize a fundamental truth. Building true frontier intelligence from India is a monumental ecosystem play. We cannot simply import models and talents, run them on borrowed infrastructure, and call them our own. If we want to solve India-scale problems at population scale, we must own the power, the hardware, and the talent and the research that drives them. That is the thesis of today’s session, India’s Path to an AGI-Enabling Ecosystem, to bridge the gap between energy, infrastructure, and research.

We have brought together the absolute pioneers of this field. Before we begin, let me quickly share our roadmap for the next one hour. We will start by inviting each of our distinguished speakers to share opening remarks on their respective domains. We will then move into a panel discussion. And then finally we will open the floor for your questions. To guide us through this we have assembled the absolute pioneers of the energy, compute and research pillars. Today we are joined by Shri Ghanshyam Prasad, Chairperson of the Central Electricity Authority; Shri Tarun Dua, Founder and Managing Director of E2E Networks; and Professor Jayadeva, GSV Chair and former HOD of Electrical Engineering at IIT Delhi.

And finally my co-founder at Chariot, Mr. Parth Sarthi. To build this ecosystem from the ground up starting with the very power that makes this revolution possible, energy. To speak on the sheer scale of this transition and to help us answer critical questions such as what we expect AI’s true energy demand in the country to be and how we are preparing and modelling our national grid to meet it, I would like to introduce a true veteran of the power sector, Shri Ghanshyam Prasad ji. Shri Ghanshyam Prasad ji presently holds the post of Chairperson of the Central Electricity Authority. With an illustrious career spanning over 35 years, his expertise covers generation, transmission and power market development. Having served as part of the G20 Energy Transition Working Group and the first Executive Director of the BIMSEC Energy Centre, his global perspective and visionary leadership are ensuring our grid is ready for the AI era.

Please join me in welcoming Shri Ghanshyam Prasad to the dais for his opening remarks.

Shri Ghanshyam Prasad

I think the speaker wanted me to speak on some of the key challenges that are likely to happen in the era when we are transiting from the present situation to an AI-enabled or AI-driven power system. We all know that the moment we talk about AI, which means that it is supposed to be data-intensive and it is to be a power guzzler, and we are talking about the data centers which will try to enable it as we go along. In India, we are now transiting from smaller data centers. Earlier, we used to have a small data center. We have 10 megawatt, 50 megawatt data centers to now gigawatt scale data centers at many places, particularly in Mumbai, Vizag, Chennai and all other places.

So far, we have a visibility of around 16 gigawatt of such data centers coming in across India. The challenges remain a few, particularly if I see from the perspective of serving a large load which earlier we thought was going to be almost like a constant load, but practically it is not. And if it is not, then how do we manage such a type of variable load? So far, we were struggling with only variable sources, that is solar, wind, etc. Now, we are going to have something from the load side as well: a large load getting integrated into the DISCOM system which, like the solar system, is going to have a nature that is variable.

Second is the kind of reliability that it demands from the system. The reliability we talk about is N plus 1 plus 1 now, which means that the same data center will have to be supplied from two different sources, and they have to be slightly differently located as well. Second, even if the supply fails, then it has to be backed up simultaneously by, I’ll say, two steps, that is DG sets, and each DG set will have to be backed up by another DG set. So we have almost four layers of security of supply. It’s definitely challenging for a country like India, which is now expanding and growing, to provide such kind of a reliability, but still we are geared to meet this kind of a challenge.

Some of you who have not seen Adani data centers, I’ll request you. It’s very close to Delhi. It’s in Noida, which is coming up. That is 50 megawatt data center being built by Adani and would be operated by Google. 10 megawatt has already been commissioned and rest 40 megawatt is in the pipeline. You can see the structure that is coming and the kind of challenges it is facing. But still, I must congratulate UP Discom who have been able to provide this kind of a reliable supply to that data centers. But this is 50 megawatt. But very soon you will find a data center coming in Mumbai area, which will be of a thousand megawatt. YJ, which may be even more than thousand megawatt.

Thousand megawatt. But the moment I say it, it means the supply that will be required to this particular center will be at least 1.7 times. That’s the near thumb rule. It may require, I think sir is saying, 2 times at least. But the data center which I visited has been designed for 1.7 times of the data center’s capacity. So the challenge is, first, how do I maintain a variable load? How do I meet the N plus 1 plus 1 criteria of supply? Some of you researchers who are sitting here probably must be aware about these kinds of things before we try to design this kind of a thing. What is furthermore going to be more challenging is that these data centers are also planning to go green.

That means had they taken a mixed supply probably the challenge of DISCOM would have been slightly lesser. But if you want to classify yourself exactly green data centers, then that means I need to ensure you that only green power flows into your data center, which means a combination of solar, wind, battery, hydro, hydro pump storage, or any such type of a combination, which we’ll be able to ensure to you. And that means I need to ensure a transmission line from such sources to your place so that at least there is no interruptions in the supply of green power being provided to you. But let me assure you that India is geared up for that kind of a challenge because we have started the journey of energy transition somewhere in 2011 or 2010, wherein we started.

We started with a meager figure of somewhere around 2 gigawatt. Now we are more than 50% in terms of renewable in the country, surpassing 250 gigawatt, which has the majority share coming from solar and then from wind, and storage is now kicking into the system. Last year, we surpassed 30 gigawatt in a single year. This year, in just 10 months, starting from April till January, we have already crossed 40,000 megawatt, which means that probably in this particular year, we will have more than 50 gigawatt coming in in a single year. So even if the data centers or the AI-driven systems demand green power, I think the country is geared up to that kind of a challenge. Further, what we are suggesting to the data centers is, please try to have a diversity.

Diversity means don’t have it at a single location. Try to be as close to RE zones as possible, slightly away from the main town, and at diversified locations. So far we have two landing points in the country. It is Mumbai and Chennai, but we are trying to diversify that as well. So we can have multiple landing points in the country so that the data centers can come at multiple locations and so that at least the challenges of the DISCOMs will get diversified. Coming back to the further stability, and since I have been asked for the international scenario as well, so India is also connected to Nepal, Bhutan, Bangladesh, and to some extent to Myanmar. And we are also promoting to very soon connect with UAE and Saudi Arabia and Singapore.

And even Sri Lanka. So the moment we are going towards the other countries, that means we will try to have both: not only power, but we’ll try to have the communication network as well. And there are situations emerging that maybe these countries will like to have the data centers in India and get supported through that kind of a system. But all these, definitely, as we go along will require a huge amount of balancing power and storage capacity. So right now we are depending on two major sources: either it is hydro pump storage or it is coming from the battery energy storage systems. Recently we launched a report which gives us the confidence that we’ll have somewhere around 100 gigawatt of hydro pump storage coming within the next 10 years, which is going to be a very good support to meeting the 24-hour supply to these data centers.

Similarly, because we have to cut down our carbon footprint, we are also trying to have a roadmap for 100 gigawatt of nuclear, which is targeted for 2047. But there is visibility even as of now that we go in a fleet mode: we will be trying to achieve somewhere around 22 gigawatt by 2032 or 2034, and then move up further as more technology kicks in and more expertise is gained, particularly from the private sector and all other sectors. So holistically, we have a huge amount of challenges, but to meet them we have a good mix of resources in the country, and the country is geared up for those kinds of challenges. The country is also able to build transmission lines in record time compared to anywhere in the world: we are able to provide connectivity in a 24- to 36-month time frame, whereas in the US, for example, they take around 10 years to grant a connection, that is the kind of waiting list they have, and even European countries take more than 5 years to build transmission lines. So at the end I will only say that we are totally geared up for any kind of innovation. All the youngsters are welcome from across the world to set up their systems here, and I can assure you that the country is fully equipped and fully geared up to support you. Thank you so much. [Applause]

Suvrat Bhoosha

Thank you, sir, for setting the stage with those vital insights on our energy readiness. I will just quickly ask the panel to get together for a group photograph. [Applause] Thank you. By delivering enterprise-grade cloud infrastructure at significantly lower costs, he is democratizing AI and empowering over 10,000 innovators with advanced H100, H200, and B200 GPU infrastructure. His work is building the foundational infrastructure that enables our sovereign AI ambitions. I would now like to invite Tarun to the dais for his opening remarks.

Tarun Dua

Yeah, thanks, Suvrat. Thank you for this opportunity to be a part of this august panel. Building infrastructure is something we have been doing since 2009, when we began our journey as E2E Networks. But there is an incident from even 10 years before that. We had a startup plan, somewhere around the 2006 era or even before that. Three or four friends who were working in the IT industry were discussing: oh, we are going to make a website, and this is what the website is going to do, and this is how the website will make money. So the fourth guy asked the question: but who is paying for building the website?

So the idea was that it is always someone in the West who is outsourcing the development of the website to you, and you are building the website for them, not for India. That was the era, once upon a time in India, when we used to do everything for the world, not for ourselves. The second stage was when we started doing things for ourselves. Now, the third stage is what we are doing today as a country. We are saying that not only are we going to do things for ourselves, we are going to do things for the world, and become the innovation hub and the innovation capital of building cloud infrastructure for the world.

So with that, I would like to once again hand the stage back to Suvrat.

Suvrat Bhoosha

Thank you so much, Tarun, for sharing how E2E Networks got started and the vision behind it. But raw compute and energy are just untapped potential without human ingenuity and the mathematical rigor to harness them. It is my deep honor to introduce Professor Jayadeva, the GSV Chair Professor and former Head of the Department of Electrical Engineering at IIT Delhi, and an alumnus of the same department. Professor Jayadeva is a trailblazer whose internationally recognized work bridges theoretical mathematics and practical AI. His group was amongst the first to fabricate an SVM-based A/D converter on chip. His recent work on minimal complexity machines provides astounding model-size reductions of up to 300 times. His contributions to optimization and machine learning are vital to building highly efficient indigenous models.

Sir, we would look forward to your opening thoughts on how India can contribute to the research and talent pipeline for building artificial intelligence models from the country.

Professor Jayadeva

VLSI, and as it turns out there are a host of issues that need, if you ask me, serious discussion and brainstorming. Primary among them is the issue of manpower. If you remember Silicon Valley at one time, the word IC jokingly used to be said to stand for Indians and Chinese. The intellectual innovation that built Silicon Valley and most of the entities there that are known today came out of Indian universities, out of the IITs, a few decades back. The question is: what would it take to build that same kind of ecosystem here? You need a critical mass of very smart researchers doing work within the country.

And we have to supply the reasons for them wanting to do that. First amongst those is: what is the career connect for a student wanting to pursue his or her PhD, or any other research degree for that matter, at a university here? Before I continue, I should just put in a disclaimer: all of my comments are my own personal comments and not representative of IIT Delhi. If you ask a student today, and a lot of them come to us for recommendation letters, in most cases the first choice for a research degree would be a university outside. That has to change, and it is changing, but it is changing slowly. What are the reasons for that? Think of a student who is excited by research and wants to do it at a university within the country: what is the career after that? In a university research lab in the US or elsewhere, that connect is directly visible; here it is missing in most places. Not because there are no industry-driven projects, but because the nature of those projects is different. In many of the successful examples I can discuss when we have that panel, industry has embedded its researchers within the university, alongside other students and other researchers, so those working for their PhD are already, for example, employees working in the university environment. The scalability of research is very difficult within industry; it is expensive to explore ideas there, because out of 10 or even 50 ideas that you explore, maybe one becomes successful and ends up returning revenue to you. It is far cheaper to do that exploration within the university environment. We have to find models that allow universities and industry to work together, and also to find ways so that the biggest bottleneck, IPR sharing, which is really the bone of contention and the key point in most MOUs that you sign, is handled more seamlessly and in a simplified fashion. The other difficulty is the way the entire ecosystem is configured. There is a deterrent from many Indian parents to their children, in a sense: why don't you finish your current degree first, join a job, and worry about a higher degree or PhD later? The difficulty is that research is best done when people are in their prime, when they are overflowing with new ideas. Because once they are in a job, they get saddled with other responsibilities, familial and otherwise.

And it never ends up being the same story a few years down the line. This hits women candidates particularly hard, because, although I don't want to make it a generic statement, there is pressure on many of them from their parents to get settled early. So we find, as a consequence, fewer women in engineering research as compared to male candidates. And finally, the incentive in terms of what people get if they join a research career and eventually join industry or elsewhere needs to be made far sharper and far clearer today. If a student joins industry today after their undergraduate…

degree and works there for a while, many of them continue doing research in the industrial setting. But as I said, exploration is costly within industry itself. So unless the student has a clear-cut motivation to do outstanding research early on, so that the industry or whatever career option offers them a significant incentive to do it, I think the scalability will be missing. So I'll stop. I think I have probably taken more time than I should have, but we can discuss.

Suvrat Bhoosha

Thank you, Professor. I think your vision for preparing the next generation of researchers, and what it takes to incentivize them, is exactly what this ecosystem needs to thrive. Finally, I'd like to introduce my co-founder at Chariot, Mr. Parth Sarthi. Parth Sarthi did his undergraduate and master's degrees in computer science, went to Stanford for his PhD in engineering, and more recently was working on the Google Gemini DeepMind team on the Deep Think project. He was the inventor of RAPTOR, which is currently the state-of-the-art technique in retrieval-augmented generation, based on which retrieval-augmented generation pipelines today operate. I'd love for Parth to speak on what it takes to build sovereign frontier models, and the differences he has seen building these models in the West versus building them from India.

Thank you.

Parth Sarthi

Thank you. India, under the IndiaAI Mission, has 38,000 GPUs, I think scaling to more than 50,000, which is so much more than what we had a year or two ago, thanks to the mission, and I'm sure the scaling up will continue. But if you look at the West, there are companies with many more GPUs, with deals for many Blackwell and Rubin chips coming in. I was at Google DeepMind, where I worked on Gemini Deep Think, the reasoning capabilities of one of the most resource-rich labs in the world, and this number of GPUs is going to go up. But why does any of this matter? Why is there a GPU race at all? Why can't we just write better algorithms and make better models? The answer, in my opinion, is one of the most important empirical discoveries in the history of computer science: scaling laws. The GPT papers were impressive; GPT-2 could write paragraphs, GPT-3 could write essays, they were really good work. But the GPT papers were the tinder.

The match, the thing that actually started this whole AI revolution and lit up the entire industry, was scaling laws. In January 2020, Jared Kaplan and some colleagues at OpenAI, including Dario Amodei, who went on to found Anthropic, published a paper called Scaling Laws for Neural Language Models, and what they found was really simple. If you plot a neural network's loss, its error rate, how wrong it is, against the amount of compute used to train it, on a log-log scale, you basically get a straight line. A very clean, smooth power law. A straight line that spans 7 orders of magnitude. In really simple language, that means every time you 10x your compute, your model gets measurably, predictably better.

Not randomly, not sometimes, every single time. The exponent they found was roughly 0.07, which means for every doubling of the parameters you see the loss drop by about 5%. This sounds small, but on a log-loss scale, across many, many doublings, these gains compound enormously. GPT-2 to GPT-3 was a 100x increase, 3 to 4 was another 100x, and each jump produced a leap in capability. And then DeepMind's Chinchilla paper, which corrected this, said you need to scale your data and your model size roughly equally. The reason this was so consequential is that it turned intelligence into an engineering problem, not a science problem. You don't really need a breakthrough.
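The power-law arithmetic described here can be sketched numerically. The exponent 0.07 is the rough figure quoted above; the prefactor and the exact fitted values in the actual paper differ by which resource (parameters, data, compute) is scaled, so treat this as an illustration of how the per-doubling drops compound, not as the paper's fit:

```python
# Sketch of the scaling-law arithmetic: loss follows L = a * x**(-alpha)
# with alpha ~= 0.07 as quoted in the talk. The constant a cancels out
# when we look at *relative* loss, so we only need the exponent.

alpha = 0.07

# Fraction of the loss retained after one doubling: ~0.953, i.e. ~5% drop.
per_doubling = 2 ** (-alpha)
print(f"loss retained per doubling:  {per_doubling:.3f}")

# One 100x jump (a GPT-2 -> GPT-3 style scale-up) is ~6.6 doublings,
# and the small drops compound to a ~28% reduction in loss.
jump_100x = 100 ** (-alpha)
print(f"loss retained per 100x jump: {jump_100x:.3f}")

# Two successive 100x jumps (GPT-2 -> GPT-4 scale): loss roughly halves.
print(f"after two 100x jumps:        {jump_100x ** 2:.3f}")
```

The point of the straight line on a log-log plot is exactly this: relative improvement per multiplicative step is constant, so capability gains are predictable in advance of training.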

You need more GPUs, more data, more electricity. You need money, and whoever has the most money wins. So you could call the race right there: the scaling laws have held across many orders of magnitude, so it becomes a spending competition. And this was the dark picture. A lot of people would ask me: why are you leaving DeepMind to come back to India to build, against this kind of backdrop? And the reason is this. About a year and a half ago, something changed. The reasoning revolution hit. There were the o1 and o3 models, and they showed there was a difference.

There was a different way to make these models smarter. This word "reasoning" gets thrown around a lot, so let me explain it in simple language. In the old paradigm, you would pre-train these models by making them bigger and training them on more data. The models would see trillions of tokens of text, and at inference time they would just generate, one token at a time, really fast, with no ability to correct their own mistakes. These reasoning models started working differently. You could give one a problem, a math problem, a coding challenge, a logic puzzle, and let it think.

It would generate a long chain of thought: think for a bit, try an approach, maybe backtrack to a different approach, and eventually reach a final answer. The result was a new scaling law, where you could spend more RL training compute, and now we are even seeing this new type of RL compute exceed the amount of compute spent during pre-training. So this was a reset. Let me explain why. If you look at RL training, the majority of your compute is not actually in gradient updates, not in the training itself; it is in the model trying different things out in different rollouts, and that is basically inference. And this does not need to happen on your top-of-the-line 100,000 GPUs in one building with NVLink and InfiniBand. This RL inference, the sampling, can be asynchronous: you can generate rollouts on one set of machines and collect them, you can make it distributed, you can run it on older GPUs across multiple locations, and now we have hundreds of techniques coming out to make this work. And just doing RL is one step. The other part, and I think the main reason India will succeed, is environments. RL environments are where the majority of the training happens: a math environment has math problems, a coding environment has coding problems, where the model tries, gets feedback, and improves.
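The asynchronous rollout collection described here can be sketched minimally. Everything below is a hypothetical stand-in (the `generate_rollout` function simulates model inference with a random score); the only point being illustrated is that rollout generation is embarrassingly parallel inference that loosely coupled workers can perform independently, with results consumed in whatever order they arrive:

```python
# Minimal sketch of asynchronous rollout collection for RL training:
# many workers generate rollouts independently (inference, not gradient
# computation), and the trainer gathers results as they complete.
# All names here are illustrative, not a real training framework.
from concurrent.futures import ThreadPoolExecutor, as_completed
import random

def generate_rollout(problem_id: int) -> dict:
    """Stand-in for model inference: attempt a problem, record a reward."""
    attempt = random.random()  # pretend this is a scored chain of thought
    return {"problem": problem_id, "reward": 1.0 if attempt > 0.5 else 0.0}

def collect_rollouts(num_workers: int, num_problems: int) -> list:
    # Workers need not be synchronised or co-located; in a real system each
    # could be an older GPU in a different location. Results arrive as ready.
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        futures = [pool.submit(generate_rollout, i) for i in range(num_problems)]
        return [f.result() for f in as_completed(futures)]

batch = collect_rollouts(num_workers=8, num_problems=32)
print(len(batch), "rollouts collected")
```

The design choice this illustrates is the one named in the talk: because sampling dominates RL compute and tolerates asynchrony, the tightly networked single-building cluster matters far less than it does for pre-training.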

And the key observation is that these environments scale with humans and CPUs, not necessarily GPUs. GPUs are important, but they are not the most important thing. Building a math environment requires mathematicians. Building a coding environment requires software engineers. Building a medical environment could require doctors defining clinical scenarios. This is human expertise: it scales with people and ordinary compute, which we have a lot of in this country. So this is the bet I made. India has 1.4 billion people. We have domain experts in every field: medicine, law, agriculture, finance, education. We can work in so many languages. We can build environments for problems that a lot of labs in the West don't even know exist, like agricultural loan assessment in Tamil, legal-aid reasoning in Hindi, and so on.
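An "environment" in this sense can be sketched in a few lines. The class below is a hypothetical illustration, not any real framework: it just makes concrete the claim that an environment is a bank of expert-authored problems plus an automatic, verifiable reward check, which is why it scales with domain experts and ordinary compute rather than GPUs:

```python
# Minimal sketch of an RL environment with verifiable rewards. Domain
# experts contribute (problem, verified answer) pairs; the environment's
# only job is to score a model's attempt automatically. Illustrative only.

class MathEnvironment:
    def __init__(self, problems: dict):
        # Map each question to its expert-verified answer. Authoring this
        # dictionary is where the human expertise (and scale) lives.
        self.problems = problems

    def reward(self, question: str, model_answer: float) -> float:
        """Binary verifiable reward: 1.0 if the answer checks out, else 0.0."""
        return 1.0 if abs(self.problems[question] - model_answer) < 1e-9 else 0.0

env = MathEnvironment({"12 * 7": 84.0, "sqrt(144)": 12.0})
print(env.reward("12 * 7", 84.0))     # prints 1.0 (correct attempt)
print(env.reward("sqrt(144)", 11.0))  # prints 0.0 (incorrect attempt)
```

The same shape generalises: a coding environment swaps the answer check for running unit tests, a medical environment for clinician-defined rubrics; in every case the reward function runs on CPUs.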

These are problems that affect hundreds of millions of people, and we can build RL environments for them that don't exist anywhere else in the world. And with the IndiaAI Mission grant, we have a lot of compute to actually build this frontier, if we are smart about these environments. And if you look at India, India is a voice-first country. That's why at Chariot we are building a voice-native speech reasoning model: reasoning over speech, for all the reasons I just described, RL-trained, environment-driven, and scaled at inference. So the race to AGI has begun, and we have the right environments, the right algorithms, the right focus, and this distributed setup.

Now, with the support of a mission that is already scaling up so many GPUs, I think we can go ahead and

Suvrat Bhoosha

Thank you so much, Parth, for sharing what you think is the roadmap for building intelligence from India. With this, our distinguished speakers are now assembled, so let's dive straight into the panel discussion. I have a set of questions prepared for all our panelists, but please feel free to interrupt, and if somebody can go around with a mic for audience questions, please do so. I'll ask my first question to Tarun. We have all spoken about large GPU clusters and how they are growing in size. I would love to understand your perspective on where India's compute requirements are today, where you forecast them going, and where you think the demand is coming from.

Tarun Dua

Sure. So, a number of things. If we just look at the compute requirements of, say, the top 1500 or 2000 or 2500 or even 5000 organisations: are there enough teams that can utilise, say, 16 to 128 GPUs? Just looking at the top 1000 organisations, do they need at least 128 new GPUs every year? I think the answer is most likely yes. More likely, the answer is that initially we need 128 GPUs and eventually we are going to use at least 1000 GPUs, with multiple teams within an organisation trying to solve multiple problems. And it's not just that GPUs are used only for training and inference; they are also used for data cataloguing, and for many different types of inference which are available straight out of the box.

So, net net, the compute environment required by each of these organisations is going to be of the size of at least 1024 GPUs, and that is representative of the mid segment, the SMEs, higher education and research; there are so many different types of organisations apart from for-profit companies. Net net, if we were just to look at, say, a thousand organisations wanting 128 GPUs each, you are looking at India needing at least 128,000 GPUs, and we are not there yet. Which means there is a journey ahead of us in terms of building the infrastructure, and having the faith and confidence that yes, India may be lagging maybe 18 months behind the rest of the world, but that lag will keep coming down, and at some point we leapfrog.

Like we did with 4G and 5G. When that leapfrog happens, those compute requirements will explode even further. So I think it is safe to say that India, a country with 20% of the world's population and currently the capacity to process about 3% of the world's data, will at some point in the future leapfrog to processing not 20% but maybe 40-50% of the world's data, by becoming the data center and compute capital of the world. Those are my thoughts around that.
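The back-of-the-envelope demand estimate in this answer can be written out directly. The figures are the speaker's own stated assumptions (a representative 1000 organisations, 128 GPUs each initially, 1024 each at maturity), not measured data:

```python
# The speaker's back-of-the-envelope estimate of India's GPU demand.
orgs = 1000                  # representative mid-segment organisations
gpus_per_org_initial = 128   # entry-level cluster per organisation
gpus_per_org_mature = 1024   # once multiple teams run training, inference,
                             # and data-cataloguing workloads in parallel

floor_today = orgs * gpus_per_org_initial   # the ~128,000-GPU floor cited
mature_demand = orgs * gpus_per_org_mature  # demand once each org scales up

print(f"today's floor:  {floor_today:,} GPUs")    # 128,000
print(f"mature demand:  {mature_demand:,} GPUs")  # 1,024,000
```

Even under these conservative assumptions, the mature figure is roughly 8x the floor, which is the gap the "journey ahead of us" refers to.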

Suvrat Bhoosha

No, absolutely. Thank you so much, Tarun, for sharing that. So I think at a bare minimum what you're saying is that the 128,000-GPU infrastructure that we.

Tarun Dua

That's today's requirement in India alone. And we don't serve India alone: when we build compute infrastructure, we serve the whole world, because this is not a super latency-sensitive environment like a website or a CDN. Reasoning models think, and when you add another 200 milliseconds to the thinking process, it does not really add a whole lot of latency to what people experience. So in that sense, we can serve compute for the world, which means we can build a lot more than what India alone needs.

Suvrat Bhoosha

Makes sense. And that's an excellent segue to my next question, which I'll direct to Shri Ghanshyam Prasad ji. Sir, when we talk about the kind of compute infrastructure needed for the country, how do you forecast the energy consumption of modern-day data centers compared to our overall energy requirements? And how is the country prepared to meet that over a 12-, 24-, or 36-month time horizon?

Shri Ghanshyam Prasad

See, as I mentioned in my opening remark, we have already factored in a demand equivalent to 16 gigawatt which we are projecting for the data centers. But we have changed the philosophy of planning in India: we are upgrading our planning systems every year; it has been made dynamic. Earlier you used to hear of things like five-year plans. Those days have gone. We are updating our transmission plans every six months, and the resource-adequacy plans are being updated every year. When I was speaking in Singapore, where US and other regulators were present, they asked how we are able to manage this in six months and one year. I said it is the computing environment that has come into India; we have really caught up and are able to do this.

So I said it’s a computing environment that has gone in India and we have really cashed up and we are able to do this. And that has really helped. If you see whatever error we make in the planning process or the projections we are able to correct it within no time. And that has led us to do a course correction immediately whenever we have this kind of plan. Second is the growing demand that the country has right now, which is phenomenal. I will say it is much much higher than any other countries across the world most of the European countries you will find that they are growing at either they are stagnant or growing at 1 % or 1 .5 % or 2 % at the most we are growing at 7 to 8 % and some year we have even grown at rate of around 10 % so meeting that kind of demand unless you are resilient and you are able to do it in real time frame probably you will not be able to sustain that kind of a thing and the kind of further expectations that is there with the customers probably you need to have that kind of jump.

Suvrat Bhoosha

And sir, one follow-up question, actually to both you and Tarun. These modern data centers are hitting quite high energy densities, so one thing we hear is: should data centers move close to where the energy generation is happening? When we talk about the new upcoming data center hubs, you talked about Mumbai being one of them, but in my naive opinion there is a lot of energy production happening in states like Rajasthan. How do you foresee data centers moving close to these energy hubs? Would you be interested in building close to these regions? I would love to hear both of your perspectives.

Tarun Dua

So I am really looking forward to SMR nuclear reactors being made available as quickly as possible in data center campuses. It may sound like I am shilling for nuclear power for no reason, but nuclear power is also very reliable: you can run it continuously for 8 or 10 years. There are several advantages. You are not transporting power on the grid, so you don't have to pay the transmission fee, which is very reasonable in India, but every cost saved can be passed on to the end customer. Similarly, you don't need diesel generators on site; you can just have a slightly larger battery energy storage system along with nuclear, and build the data centers of the future. That is something I am really looking forward to, but I think it could be three to five years away. So my thought is: wherever you are putting data centers, you can put the power there, as long as sufficient land is available, because nuclear power requires some free land around the facility. Another advantage of nuclear power is that once you have set up some land for it, you can modularly increase the size: say you start with 220 megawatts, then you can add in chunks of 220 megawatts, which is the most dominant design of the SMRs, or even the BSR designs that are there. So that's what I think about it.

Shri Ghanshyam Prasad

I think what Tarun said is very right, but the visibility I see for SMRs may not be 3 to 4 years; maybe a slightly longer period, I am not very sure about it. I have been talking to most of the people who are going to be in the nuclear business, because so far we have only NPCIL, the Nuclear Power Corporation of India Limited, and the others are also slightly apprehensive that it may take slightly longer. But again, what is required, as he rightly said, is a containment zone, and that containment zone varies anywhere between 1 kilometer and 5 kilometers, depending on the capacity you are going to have in the nuclear space.

That means again you will be moving away from the main crowded places, because you require a containment zone wherein no habitation is allowed. Second, you rightly mentioned that we are saying you should go as close to the resource center as possible, because if you really need green power, then you should be closest to it. If your source is somewhere in Rajasthan or Gujarat, we require a huge amount of transmission lines, and we are trying to optimize the transmission system itself, so let it be at the generation site. And the good thing for India is that we have 8 to 9 states which are very rich in renewables: Gujarat, Rajasthan, Maharashtra, Karnataka, Telangana, Andhra. So you have multiple choices; it is not that you have only one place to put it. Similarly, the IT hubs getting created are also scattered around the country. And the last point: we are trying to have multiple landing points. The moment you have multiple landing points, for example for Singapore, Vizag or Paradeep or Gopalpur could be another choice; Singapore is probably likely to be connected with Vizag, and similarly for the western side as well. So you need to have this diversity, and that is how you will be able to successfully meet your demand.

Suvrat Bhoosha

No, makes sense. Thank you so much for sharing those points of view. I will move on, sir, to Professor Jayadeva. When we talk about the talent pipeline for the country, we would love to hear your perspective on the undergraduate readiness of our workforce for training and deploying AI workloads, and your point of view on many people in our country moving abroad for higher education or better work opportunities, compared to the PhD education system in the country. What would your perspective be on empowering more of our students to pursue PhD and grad-school opportunities in India versus at other institutions around the world?

Professor Jayadeva

They were actually employees of a firm, working full time in the department. This company had stationed them in the department and said: work for your PhD, but you have to work on areas or problems that are relevant. They were, of course, discussing with many other students in the department who were also in that lab, and then there were professors who were part of that team. That kind of success story is, I would say, rare. And if one finds a way to replicate those examples in numbers, I think the story will change dramatically. It takes a leap of faith: most HR managers are averse to letting their employees work full time at a university.

Well, if you're working there, you're not on site, and therefore you're on some kind of leave. In this case, we created a way so that they could log on over VPN and work as if they were on site, so it was effectively on-site for them. The other problem, of course, is that people have to join research careers early. They have to take that plunge early on; that is when they are most productive, when they can churn out new ideas quickly. And while the government is doing a great deal to make that happen, I think we need more examples from industry trying to bridge that gap. If that happens, in my view, the story will dramatically change.

Suvrat Bhoosha

How do students today look at a PhD as a career path right out of college, compared with the other opportunities they may have?

Professor Jayadeva

The duration of the PhD is the primary deterrent: it is 5 years. And there is a social deterrent as well. I have heard from students that when a PhD student gets back home, some neighbor will make a comment: well, you are still studying, is it? Still at college, still at university, haven't got out, aren't in a job. That mindset will change, in my view, only if people get paid more; if they are actually employees, say, working, that changes things. Fellowships need to be far more lucrative, and that can only happen with industry help, in my view. But there is a via media, a path in between: we have something called MS (Research), which is a research degree that takes about 2 years. The number of PhD enrollments, I would say, is now static; it dropped after COVID. But in this MS (Research) degree the numbers have actually tripled in the last 3 to 4 years, and I am speaking for our department. So we have to really brainstorm; I think that dialogue hasn't happened in sufficient measure to be able to answer your question.

Shri Ghanshyam Prasad

If I may supplement: I think the Professor is describing the practical case, but the government is now thinking in a slightly different manner. You must have heard about ANRF, the Anusandhan National Research Foundation, which has been created with an outlay of 1 lakh crore rupees. This is going to be across the country, across all segments and sectors, almost under the Principal Scientific Advisor of the country; very recently we had a meeting with him. And very recently you must have seen that we have got something like 20,000 crores under CCUS, carbon capture, utilisation and storage. So these are some of the projects which are now being identified.

What are the gaps that India has relative to other countries in terms of technology, and can we surpass them? With that objective, this fund has been created, and it is likely that industry and organizations of this kind will participate. In the power sector we are even asking: can we have a university, or maybe a cluster of such institutions? One has already been experimented with in Gandhinagar and is doing a good job, so that all those people who are trying to do something, trying to innovate, can be supported through some kind of a fund, and then the industry takes over. So there is the gap that used to exist, where a PhD student does a paper, or a professor does a paper, and even his promotion is linked to paper publication.

So that kind of situation will have to be modified: whatever PhD you do, whatever research you do, it has to be taken forward from there. What we are thinking is that it is concept to commercialization. You have to take it to that level, and only then will it flow. Very recently, only about two months back, we also held a good competition on AI use cases in the power sector, and we identified a few companies who really have those kinds of ideas. We have already assigned them some tasks on a nomination basis.

So that is the kind of change you will find: a change in the mindset of the government, and an effort to support these kinds of activities. That is what is going to happen.

Tarun Dua

Sir, I would like to add something here. These are great ideas, that research should be promoted and supported in India. Now, academia does a very good job of identifying pure problems which need to be solved and which advance human knowledge. We in industry see build-versus-buy decisions almost every day, and we also look at the roadmaps of what needs to be done and in what amount of time. To give you an example: if you look at things like optoelectronic networks and the co-packaging of optics with electronics, those kinds of problems are very well known. To go from 100 Gbps to 1.6 Tbps, there is a certain time frame in which it has to be done.

And it has to be done at a certain volume of production. Which means that resources have to be deployed in a manner that produces goal-directed research in a certain time frame. What counts as a good outcome is something that we in industry can help define, but most of the time we don't have the kind of money to deploy behind that goal-directed research, and we do not have enough use cases of our own to be able to support that volume of research. So that's my suggestion: task people like us, who make build-versus-buy decisions, to at least create the roadmaps, saying this would be good to have if we can do it in this much time.

If we don't do it in this time frame, somebody else in the world will go and do it. That is something we can help with.

Shri Ghanshyam Prasad

Yeah, absolutely. And this is the basic idea with which we are trying to have this. I'll give you one example. In fact, we are facing a huge challenge in research right now in the country: for one technology we have only two companies in the world, and they are really taking us for a ride in terms of supply chain, prices and so on. So we said, nothing doing, can we have our own industry coming up in India? We have now lined up L&T and Power Grid Corporation of India, both of them contributing 300 crores each, to go after those gaps. We have already identified something like 76 elements in the power sector which need immediate attention, so we need to go aggressive now on these things. Similarly in other sectors as well: in IT, MeitY is trying to do it; likewise in mining, in the critical minerals area. You will find all the ministries have now woken up to take up this kind of challenge.

Thank you.

Professor Jayadeva

I just wanted to react in a different way to some of these comments. It's not always the case that research is divorced from application. A lot of research is applied; a lot of the research that happens within universities, the IITs and so on, actually is with industry. But more often than not, the industry-funded projects tend to be at arm's length. It's like saying, look, here is the problem, see if you can find a solution. Sir, we need both types of research; if we only do goal-directed research, we will never innovate really well. No, I just wanted to say it differently. The point I was trying to emphasize is not about short term versus medium term versus long term.

You need to have a mix of all three. Certainly new ideas come forth at all possible levels. Okay. As for the difference an idea eventually makes: well, ideas don't make money, companies make money, organizations make money, so the key is translation. It is difficult to create an ecosystem within a university that is efficient at translation. On the industry side, translation is much simpler; they are geared up for production. As an example, if you ask a student to write production-level code, it's not going to happen; it's not feasible. That's why one has to rethink the nature of this partnership. It's not about funding; it's about trying to work on these problems together. I'll give you an example of what happens, or used to happen, say at the Berkeley Cadence labs: Cadence set up a lab at Berkeley, and they had researchers from both sides working together in one place.

Now, it might be that a new idea comes along, you come across something that is groundbreaking; it will take time to scale, and you want to look at that separately. There are problems that would give an edge to a company today and need to be solved in the next six months; those are also problems that people need to work on and look at. And sometimes things come simply out of the discussion: something a company has been doing for the last ten years, and it turns out there is a far more efficient way that you could deploy in the next six months. So all three happen. Right now, I would say, the dialogue is at arm's length.

And if that changes, I would say funding is less the key than really making that dialogue happen, because when that starts happening, you will also see excited students saying, look, I know that I will find a career. Let me give an example that doesn't take any money from the government; it is fully sponsored and supported by industry or by us. It's at IIT Delhi, called the VLSI Design Tools and Technology Program, started in 1996. To this day all the students are sponsored either by projects or by industry, and many of their projects have led to patents and other things. Two of the gold medalists of that program decided to forego all their placement offers.

They had three or four offers in hand, off campus and on campus, and these gold medalists decided to stay back and continue with a PhD, because they realized all these companies want them. They are really good at what they do; they will get a job. But they wanted to see that chip come out, see it developed, test it out and see the outcome. That level of excitement really happens when these are live projects with involvement from industry, or whoever else, even a government entity or public sector. But you need the end users enmeshed in the problem discussion and the solutions.

Suvrat Bhoosha

No, absolutely. Thank you so much for sharing that, sir, and everybody. I'd like to invite Parth to share a personal story, on what sir just said about people who move to the US and often don't come back, and also on the other side: while you were studying, you decided to take a break and join Google DeepMind part time. What was that thought process like? It sounds very similar to what sir described, the passion of working on a live project. And what were your reasons for moving back?

Parth Sarthi

Thanks, Suvrat. I think the thought process there was: I was doing my undergrad and my masters, and at some point I wanted to go on and do a PhD, and perhaps be in academia. That was definitely one of the considerations, because I got into research pretty early, even in my undergrad. But there was the excitement around AI, and the fact that a lot of PhDs and professors at my university were going out and building companies, showing that this research, which actually goes back to the 80s, is now paying dividends and leading to this new technological revolution. As Professor said, a lot of these ideas take a while to materialize, and we were seeing that materialization happen in the Bay Area then and there. At that time AI required a lot of compute, and the big industrial labs had that compute; universities had some of it, but not at that scale. Scaling laws were playing out, so you wanted that scale, and that was my reason to be at DeepMind: to see that scale. But we need that same infrastructure in India, and the same research and people in India, and that is what drove me back here. Now, with the Mission's support, we have similar compute in India. These scaling laws show you can scale up, but there are new innovations that India needs, and there are so many smart people here. Now that we have the compute and we have the people, it just made a lot of sense for me to be back here and build the same thing from India.

Suvrat Bhoosha

No, thanks for sharing that, Parth. So I'll open the floor for questions. There are mics here if people in the audience want to ask.

Audience

Hi, my name is Pradeep Subramaniam. I come from the physical world: R&D, technology, etc. I have recently been building an agentic AI, but I come from the physical world. So my question is to Ghanshyamji and to Parth, actually. If you build any infrastructure, the physical layer, in terms of IoT sensors, the part which collects data is the most important part. What I was finding in this whole discussion was data centers, infrastructure, but nobody talked about the IoT part, the physical collection of data. For example, in the electricity plants that you have, whether at power generation, distribution or transmission, there are hardly any IoT-based systems, only legacy SCADA systems, right?

They are not connected end to end in terms of building a digital twin of the electric system, right? We built something like this for the Haryana government, but it hasn't scaled to the full extent. So where is India's role in building the ecosystem for the physical layer, which can generate so much data, which can help build this AGI? While infrastructure is good, how do we create that? China does it, right? China's use cases are full of physical layers. In India, for example, UPI did not require much of a physical layer, so we could build it easily. I think the catch is building the physical layer.

What are we doing for that? For example, in your area, sir.

Shri Ghanshyam Prasad

Yeah, thank you. Thank you for raising this particular concern of the industry, and this is definitely an issue; let me be honest about it. We have very good infrastructure, particularly on the generation side, through transmission and up to the load dispatch centers. Up to that point we are absolutely at par with the world. But when it comes to the actual connection with the customers, that is, distribution and the customer link, we are probably still lagging behind. That is the physical and practical situation we are in at present, and you must all have heard about the issues of the distribution licensees and their financial viability. Until they are financially viable, they will probably not move into the area of automation.

Audience

My question is why is the government not supporting to help create this data?

Shri Ghanshyam Prasad

I am coming to that. We realized this; it's not that the government did not realize that this particular segment of the power sector's value chain requires some kind of support. We had been supporting this segment earlier as well, and most recently the RDSS program has kicked in. This is a reform-linked program: if you are able to achieve certain goals you are given the money, otherwise you are not. And it supports two very important things. One is the infrastructure required for ensuring reliability of supply, and the second is automation systems. We need smart meters; unless you have communicable meters, you cannot bring that kind of smartness into the entire value chain. As I said, we had this missing link: we had come up to the state load dispatch centers, but to go from the state load dispatch centers and connect with the customers, you needed this kind of smartness. That is why we introduced smart meters, and the rollout is underway. So far more than around 3 crore meters have already been installed in the country, with 25 crore already in the pipeline.

So hopefully we will be able to reach that number in the next 2 years, or maybe 3. What does that lead to? It leads to SCADA systems being developed in this segment as well, isn't it? Right now we do have SCADA systems, but they come from the other side of the fence. So we have shortlisted a few companies and are working with them so that we have our own indigenous SCADA systems supporting the entire value chain. You all know about the cyber-security concerns, and we worry about them in a similar manner, so we want these kinds of things to be developed in India as well. Now what does this mean? The moment you have automation in this particular segment, a huge amount of data is going to be generated. How do you use this data?

That is why I said we have already had one round of discussion with startups and some AI-driven companies, and let me tell you about their enthusiasm level. They say, sir, give me a one-year time frame and I am going to map all your assets across the country. That is the kind of enthusiasm in these youngsters, and we really salute this group. That is how my distribution team in the ministry is working with these people, so that we can quickly take their support, map the assets, and really go further.

Further, whenever you have this data, it has to be used not only for billing, right? It has to be used for planning: planning of the network, planning the optimization of resources. You can define any number of use cases the moment you have all this. So this is in the pipeline. I'm really thankful to you for reminding me of this, and we are working on it.
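As a minimal illustration of the planning use case mentioned above, here is a hypothetical sketch (all feeder names, timestamps and readings are invented for the example; no real metering API is assumed) of aggregating smart-meter interval readings to estimate peak demand per feeder, one of the simplest inputs to network planning:

```python
# Hypothetical sketch: estimate per-feeder peak load from smart-meter
# interval readings. Feeder IDs, timestamps and kW values are invented.
from collections import defaultdict

def feeder_peak_kw(readings):
    """readings: iterable of (feeder_id, interval_start, kw) tuples.
    Returns the peak aggregate demand (kW) observed on each feeder."""
    totals = defaultdict(lambda: defaultdict(float))
    for feeder, interval, kw in readings:
        # Sum all meters on the same feeder within the same interval.
        totals[feeder][interval] += kw
    # The peak is the largest per-interval aggregate for each feeder.
    return {f: max(by_interval.values()) for f, by_interval in totals.items()}

sample = [
    ("F1", "09:00", 2.0), ("F1", "09:00", 3.5),   # two meters, same interval
    ("F1", "09:30", 4.0),
    ("F2", "09:00", 1.0),
]
print(feeder_peak_kw(sample))   # {'F1': 5.5, 'F2': 1.0}
```

A utility could compare such peaks against feeder or transformer ratings to decide where network reinforcement is needed, which is one way the "planning of the network" use case could start from billing-grade meter data.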

Audience

So my point was that, for example, geo-tagging of all your assets, right from power generation to the end point at the consumer, is not done end to end today. It's also a security risk for the country if some other server is hosting all that data. It should all be hosted in India, in Indian data centers; every platform at the back end, including the LLM which manages it, should be completely in India. So I'm saying that end-to-end deployment of AGI will happen only when we have the real physical layer generating an enormous amount of secure data, hosted not outside India but lying within the sovereign data centers of India.

That's the kind of thinking the government needs to do; then we can become that. So that's why I wanted my second question to go to Parth: what are we doing to build the kind of data which will help us set up the AGI part? AGI doesn't come simply from some small use case; you need trillions and trillions of tokens of data for that, and you need domain expertise and knowledge to build it. How do we do that? That's the question.

Shri Ghanshyam Prasad

Before Parth takes this up, and he may answer differently, these youngsters have better insights than me. But this gap that you rightly mentioned, about data being hosted elsewhere: in fact, we came to know of it the moment we started rolling out smart meters in our systems, and we found that the suppliers had their resources hosted somewhere outside. We immediately took action and said, nothing doing, all the data has to be housed in the country itself. So right now, whatever smart meters we are placing in the country, their data does not go out; it stays in India. Wherever we are able to plug a gap, we are trying to do so, and trying to create that physical layer so that we are cyber-secure. That is very, very important for the power sector.

Parth, I think you will take over now.

Parth Sarthi

Thank you, sir. Just to echo some of sir's thoughts: a lot of work is actually being done on this layer. If you look at datasets, we need indigenous datasets, and you have AIKosh from the IndiaAI Mission, which is solving exactly this: Indian datasets for Indian companies to build these frontier models. If you look at compute, as you said, we need compute in India. If you look at the budget policy on data centers, you will see a lot of these data centers coming in; there are already a lot being built. You have Tarunji, who is building E2E Networks, infrastructure for India, so that the frontier models of India can be hosted in India, all on local compute.

So I think the GPU infrastructure being supported by the IndiaAI Mission is solving for exactly the case you're raising. Over the last two years, we have already scaled up our GPUs enormously for this. A lot of work has already been done, and it will continue.

Tarun Dua

Sir, I would like to take a stab at answering your question. I think we still have some gaps in being able to harness the impatience of the youth to build physical stuff. Unlike software, physical stuff actually costs money, and the cycle time today is very high. You need to be able to reach the nearest 3D printer to prototype; you need to be able to design the chips; you need to solve all the physics problems. What the LLMs and frontier models will do for us is reduce the cycle time of the thinking part. The actual physical-world calculations, the digital-twin part, used to take a lot of time, and that gets solved faster. What we still need to solve is the prototyping. But having spoken to a few companies who used to do physical prototyping, many have done away with a lot of it altogether and are just doing it on top of digital twins now. So I think somewhere we will converge; that's my hope.

Audience

So I am saying: imagine the next UPI-scale innovation is, say, agentic AI for vending machines; I am working on that. But the physical layer of sensors which pull the data for a vending machine, the back end of the vending machine, hardly exists: hardly any vending machine is connected to IoT or has any physical layer. It is just used like a dabba, a box. So the point I am making is: why is the government not enabling the instruments which connect this data to the AI and the data centers, so that the intelligence can be built to automate and to create more jobs? It's very counter-intuitive.

We say that with agentic AI we are going to reduce people. No. Actually the work is going to increase, because the vending-machine infrastructure will grow 10 or 20 times; it will become like Japan, with many more vending machines. But I do not think private industry can build that kind of infrastructure; I am from private industry, and I cannot build it. For me, in the day-to-day running of a vending-machine business, I get cheap labor; I cannot justify sensors. So this is a catch-22 kind of situation: for most of the infrastructure we have in India, we have cheap labor and we still manage with that. We cannot take the next leap.

How do we take the next leap, by getting platforms like UPI built with physical layers? That was the question.

Shri Ghanshyam Prasad

I think some of the answers will be given by the India Energy Stack; you must have heard about that. I am also a member of that committee, and we are deliberating on all these use cases and where the gaps are. So definitely, I think we will take care of that.

Suvrat Bhoosha

Thank you, sir, for asking that question. I would like to circulate the mic in the audience if other people would like to ask questions; people can just raise their hands. I think there is one at the back.

Audience

Hello, good morning to everyone. There are three things to develop in any industry: the primary sector, the secondary sector and the tertiary sector. Your AI Impact Summit is always talking about business models, but what about the manufacturing sector? If any unit is needed, like semiconductors, which are not developed in our country, we take Chinese companies. Although the industry is being built now, six units of the semiconductor industry are being built. But what about the 9 gigawatts which will be built for data centers by 2032? What about the manufacturing sector? Until that is developed, will we keep working on business models?

Tarun Dua

All these things are interconnected; nothing is to the exclusion of another. Whatever sector you are working in, eventually it will feed into the other sectors. As long as the intent is there to be Indian and to buy Indian, then automatically all the problems will be solved together, if the intent is that we will work together to solve all the problems. But if the intent is that I will move forward and the rest will stay behind, then we will not move forward. How are we developing that?

Audience

There is no framework. Nothing comes first or later; everything goes in parallel. Microprocessors: for AI, the most basic unit is the microprocessor, and for data centers, too, the most basic unit is the microprocessor. So what about microprocessors? Will we keep buying from China? No doubt that in 2025 six units are being set up in India and all, but so far there is no prominent result.

Shri Ghanshyam Prasad

It takes a little time to reach that level.

Tarun Dua

Sir, actually, if you look at microprocessors, whether in a mobile phone, a server or a desktop, on the whole motherboard there are about a couple of hundred pieces of intellectual property. A lot of that intellectual property is made in India, by the people of India, but the licensing doesn't go through India, because the IP is being developed with foreign money. So here we do the R&D and develop the IP, and someone else commercializes it. I think that gap has to be filled by having volumes which are domestically available. When volume is available domestically, the people working to integrate the system-on-chip IP that has been developed will see a domestic market, and after that all these things will automatically start happening. So I think the most important thing happening right now is to move the country forward and build a large market which is interconnected with the world. Once you have large markets interconnected with the world, then youngsters who are very impatient to go and build things, and who see that this can be commercialized, will go and achieve that success. So, like I said, nothing can be serialized.

You can't say that we will do this first; that would take us back to the planning era, to the communist states who used to plan: first we build the fundamentals, then we build something else on top. A free market allows you to work on all these things in parallel, and it throws up the opportunities. So if we fix our economics, all these things will be fixed on their own. Thank you.

Shri Ghanshyam Prasad

I will say one thing. Basically, we are moving gradually in the manufacturing sector. If we talk about the power sector (I don't have much information about MeitY and other areas), a lot of equipment here is almost 100% indigenous. There is some equipment ranging from 50% to 80%, which is being targeted so that its domestic content also goes to 100% in a given time frame. There is still equipment which is yet to take off, among the 76 elements we were telling you about, where we are trying to go from almost 20% to 100%. So there are different stages of indigenization, but we are definitely targeting that all this equipment be manufactured in India.

Audience

Is it the same in the primary sector, where a lot of silicon will be needed?

Shri Ghanshyam Prasad

Absolutely. In the power sector we use a lot of electronics. I gave you a small example of the IGBT. The IGBT is an electronic component which right now we are importing. We faced a challenge there and said, nothing doing. Now Indian companies are going to manufacture it. We have given them the task: they need to develop it in two years and commercialize it. Similarly, we are taking up other items. Thank you.

Suvrat Bhoosha

Professor wanted to make a comment.

Professor Jayadeva

So let me divide that answer into multiple parts. The word microprocessor, of course, is no longer really used; most current AI/ML runs on GPUs, whose architecture is very different from traditional microprocessors. For space and some other sectors, we have a fairly successful operation running at Semiconductor Complex Limited (SCL) at Mohali, near Chandigarh. The plant at SCL produces some earlier-generation microprocessors and chips for a variety of other things. There are of course similar entities around the country, but most of the effort going into the VLSI design space, the chip design space so to speak, is design.

As for manufacturing: in most countries, and I am not saying only India, many of the earlier semiconductor manufacturing plants, in the US for instance, shut down and are now producing solar panels. Most of the effort around the world is in fabless design houses, and India leads in that. If you look at Bangalore, Pune, Hyderabad and to some extent Noida, a very significant fraction of the designs for many smartphones are actually done within the country, some of them genuinely 100%, the complete design. And design is a major component of the cost of developing a new chip: the manufacturing cost is there, but most of the cost is really in the initial stage, the design cost.

That's happening in the country. Scaling up semiconductor manufacturing plants themselves will take time, but you can see it is already happening rapidly. In the case of memories, flash memories and so on, a very large investment has already been made by a fairly prominent multinational in Gujarat and elsewhere, and it has taken off very well. There are similar efforts whose outcomes and outputs you will probably start hearing about in the next 2 or 3 years, or even less. So as far as that space is concerned, Indian engineers have it almost entirely covered, and I don't think it's a cause for worry. The interlinking of these parts will, if you ask me, happen organically, because everything already exists in one ecosystem.

Thank you.

Suvrat Bhoosha

With that, I would like to thank all of our panelists for spending so much time and answering everyone's questions, and to thank the organizers for letting us go 30 minutes over time. I'd like to invite the IndiaAI Mission delegates to felicitate the panelists. Thank you so much, everybody, and thank you to Suvrat for the moderation as well.

Related Resources: knowledge base sources related to the discussion topics (14)
Factual Notes: claims verified against the Diplo knowledge base (6)
Confirmed (high)

“India projects a total data‑centre load of roughly 16 GW across the country.”

The speaker explicitly stated a projected demand of 16 GW for data-centres, which is confirmed by the transcript excerpt noting the same figure [S8].

Confirmed (high)

“India is planning electricity interconnections to the UAE and Saudi Arabia via under‑sea cables.”

A news brief reports that India intends to connect its grid to the United Arab Emirates and Saudi Arabia through under-sea cables, matching the report’s claim [S71].

Additional Context (medium)

“India added about 30 GW of renewable capacity in the previous year and roughly 40 GW so far in the current fiscal year (April‑January).”

Other sources state that India is adding around 50 GW of renewable capacity each year, providing a broader context for the yearly additions mentioned in the report [S28].

Additional Context (medium)

“A forthcoming data‑centre hub of at least 1 GW will need 1.7–2× oversizing of its power contract.”

Industry commentary notes that designers are already working on gigawatt-scale AI data-centres in India, underscoring the relevance of a ≥1 GW hub, though it does not specify the oversizing factor [S26].

Additional Context (low)

“India’s overall power availability makes it an attractive location for large‑scale AI infrastructure.”

A keynote highlights India’s substantial power availability as a key advantage for AI-focused data-centre investments, supporting the report’s emphasis on energy as a pillar [S67].

Additional Context (low)

“AI‑driven workloads will cause data‑centre electricity consumption to rise dramatically, with a 165 % increase projected by 2030 compared with 2023.”

Financial analysis predicts a 165 % rise in data-centre electricity use by 2030 due to AI, adding quantitative context to the discussion of future demand spikes [S52].

External Sources (73)
S1
Indias Roadmap to an AGI-Enabled Future — No thanks for sharing that Parth. So I’ll open the floor for questions. There are mics here if people in the audience wa…
S2
Indias Roadmap to an AGI-Enabled Future — -Parth Sarthi: Co-founder at Chariot, former Stanford PhD student and professor, former Google Gemini DeepMind team memb…
S3
Indias Roadmap to an AGI-Enabled Future — And finally my co-founder at Chariot, Mr. Parth Sarthi. To build this ecosystem from ground up starting with the very p…
S4
Indias Roadmap to an AGI-Enabled Future — Makes sense. And so that’s an excellent segue to my next question which I’ll direct to Mr. Shri Gansham Prasadji. Sir, w…
S5
https://app.faicon.ai/ai-impact-summit-2026/indias-roadmap-to-an-agi-enabled-future — And finally my co-founder at Chariot, Mr. Parth Sarthi. To build this ecosystem from ground up starting with the very p…
S6
Indias Roadmap to an AGI-Enabled Future — -Tarun Dua: Founder and Managing Director of E2E Networks, focused on building enterprise-grade cloud infrastructure and…
S7
Indias Roadmap to an AGI-Enabled Future — Researchers, founders and policy makers. At Chariot, we are proud to be one of the companies mandated to build frontier …
S8
https://dig.watch/event/india-ai-impact-summit-2026/indias-roadmap-to-an-agi-enabled-future — We have brought together the absolute pioneers of this field. Before we begin, let me quickly share our roadmap for the …
S9
Indias Roadmap to an AGI-Enabled Future — Researchers, founders and policy makers. At Chariot, we are proud to be one of the companies mandated to build frontier …
S10
Indias Roadmap to an AGI-Enabled Future — Thank you so much, Tarun, for sharing how E2E got started and the vision behind starting E2E networks. But raw compute a…
S11
Indias Roadmap to an AGI-Enabled Future — -Suvrat Bhoosha: Co-founder at Chariot, moderator of the session on “India’s Path to an AGI-Enabling Ecosystem”
S12
Indias Roadmap to an AGI-Enabled Future — Agreed with:Suvrat Bhoosha, Shri Ghanshyam Prasad, Professor Jayadeva, Parth Sarthi — India needs comprehensive ecosyste…
S13
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S14
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S15
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S16
Leveraging the postal network for a sustainable and inclusive deployment of digital infrastructure and services (UPU) — Efforts to address this issue include discussions to establish 300 H data centers across the country and the availabilit…
S17
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — Thank you, Ashish. You’ve done a fantastic job in a short time period covering the larger macro issues connected with th…
S18
Regional Leaders Discuss AI-Ready Digital Infrastructure — models or talent, how we can ensure that it works in a federated manner. I think I’ll just, I was discussing and maybe I…
S19
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Absolutely, Ankit, just trying to, this is something which I know two years back when we said that I’m putting 8000 GPUs…
S20
Rethinking Africa’s digital trade: Entrepreneurship, innovation, & value creation in the age of Generative AI (depHub) — A global solution for data privacy enforcement is proposed to address these challenges. Governments worldwide are strugg…
S21
Next Steps for Digital Worlds — Additionally, the adoption of digital worlds/metaverses was accelerated by the pandemic, but there are still challenges …
S22
AI energy demand accelerates while clean power lags — Data centres are driving asharp rise in electricity consumption, putting mounting pressure on power infrastructure that …
S23
How to make AI governance fit for purpose? — Legal and regulatory | Development The speed of AI development creates uncertainty and challenges that exceed current c…
S24
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Africa needs 2.5 million hours of GPU hours a year, 7.5 million for the next three years… Of these 7 million GPU hours…
S25
Keynote-Surya Ganguli — Ganguly explains that AI performance improvement follows predictable but slow power law relationships with increased dat…
S26
From KW to GW Scaling the Infrastructure of the Global AI Economy — The infrastructure demands represent a fundamental shift from traditional data centre design. The speakers noted that wh…
S27
World Economic Forum Panel Discussion: Global Economic Growth in the Age of AI — Energy Infrastructure and Affordability Concerns Unlocking AI’s potential requires accelerated energy infrastructure de…
S28
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — “Friends, as we gather here in a nation racing towards digital sovereignty and sustainable growth, I want to emphasize a…
S29
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Anita Gurumurthy: Thank you, thank you, Valeria, and it’s an honor to be part of this panel. So I think the starting poi…
S30
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Infrastructure and Compute Requirements for Sovereign AI: The panel extensively discussed India’s need for massive GPU i…
S31
Designing the AI Factory Scaling Compute to Sovereign AI — Gandotra praises the Indian government’s AI initiatives, stating that no other country has provided such comprehensive s…
S32
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — “What raw material is needed for AI?”[9]. “sovereign AI comes to India, we’ll have the control”[56]. “Indian government …
S33
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — The discussion reveals strong consensus on key strategic directions: comprehensive ecosystem development beyond chip man…
S34
Empowering Inclusive and Sustainable Trade in Asia-Pacific: Perspectives on the WTO E-commerce Moratorium — To ensure successful integration, bridging the gap between academia and industry is essential. Due to the rapid advancem…
S35
Socially, Economically, Environmentally Responsible Campuses | IGF 2023 Open Forum #159 — Collaboration between academia and industry is essential for effective decarbonization strategies. An example is provide…
S36
Building Indias Digital and Industrial Future with AI — Deepak Maheshwari from the Centre for Social and Economic Progress provided historical context, tracing India’s digital …
S37
Indias Roadmap to an AGI-Enabled Future — All three speakers recognize that current industry-academia partnerships are insufficient and need new models for effect…
S38
COMMUNICATION FROM THE COMMISSION TO THE EUROPEAN PARLIAMENT, THE COUNCIL, THE EUROPEAN ECONOMIC AND SOCIAL COMMITTEE AND THE COMMITTEE OF THE REGIONS — From an industrial policy perspective, the European Commission encourages the creation of industry-academia partnerships…
S39
Indias Roadmap to an AGI-Enabled Future — Industry has practical experience with technology roadmaps and can provide universities with clear direction on research…
S40
Driving Indias AI Future Growth Innovation and Impact — Evidence:Current GPU capacity is 40,000-50,000 units against a requirement of 200,000 GPUs. Evidence:Startups can apply…
S41
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Absolutely, Ankit, just trying to, this is something which I know two years back when we said that I’m putting 8000 GPUs…
S42
AI power demand pushes nuclear energy back into focus — Rising AI-driven electricity demand isstraining power gridsand renewing focus on nuclear energy as a stable, low-carbon …
S43
What is it about AI that we need to regulate? — Launch of the Global CyberPeace Indexsession highlighted structural inequalities, with Marlena Wisniak noting that”the d…
S44
Navigating the Double-Edged Sword: ICT’s and AI’s Impact on Energy Consumption, GHG Emissions, and Environmental Sustainability — Sally Radwan:Thank you very much, Robin. Good morning, everyone. And thank you for being here and thank you for inviting…
S45
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The main areas of disagreement center on global governance approaches (centralized vs. diverse), implementation mechanis…
S46
How to make AI governance fit for purpose? — Shan Zhongde: Our chairman, President Xi, has paid great importance to the development of AI. I think this is the founda…
S47
Google to invest in small modular nuclear reactors for AI energy needs — Google has signed the first-ever corporateagreementto source electricity from small modular reactors (SMRs) to power its…
S48
Data center growth at power plants faces regulatory hurdles — The Federal Energy Regulatory Commission (FERC) isexaminingthe rapid growth of energy-intensive data centers being built…
S49
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — AI’s Massive Energy Demands and Infrastructure Challenges: The discussion highlighted that AI data centers are becoming …
S50
The Innovation Beneath AI: The US-India Partnership powering the AI Era — The panel opened with Kumar’s observation that whilst AI models receive significant attention, the underlying infrastruc…
S51
AI energy demand accelerates while clean power lags — Data centres are driving asharp rise in electricity consumption, putting mounting pressure on power infrastructure that …
S52
AI boom drives massive surge in data centre power demand — According to Goldman Sachs, the surge in AI is set totransformglobal energy markets, with data centres expected to consu…
S53
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Infrastructure and Compute Requirements for Sovereign AI: The panel extensively discussed India’s need for massive GPU i…
S54
Designing the AI Factory Scaling Compute to Sovereign AI — Gandotra praises the Indian government’s AI initiatives, stating that no other country has provided such comprehensive s…
S55
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Government’s shared compute framework with 38,000+ GPUs has proven successful and demonstrates a viable model for scalin…
S56
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden — 2.Infrastructure capacity- having sovereign compute for advanced models If AI is to become electable in our democracies…
S57
Media and Education for All: Bridging Female Academic Leaders and Society towards Impactful Results — Collaboration between academia and industry is essential for effective knowledge transfer and addressing accessibility c…
S58
Socially, Economically, Environmentally Responsible Campuses | IGF 2023 Open Forum #159 — Collaboration between academia and industry is essential for effective decarbonization strategies. An example is provide…
S59
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — Summary:All three speakers emphasize that successful semiconductor workforce development requires close collaboration be…
S60
Indias Roadmap to an AGI-Enabled Future — Professor Jayadeva from IIT Delhi addressed the talent pipeline challenges, highlighting the need for better industry-ac…
S61
Indias Roadmap to an AGI-Enabled Future — Physical Infrastructure and Data Sovereignty: Discussion of the critical need for IoT sensors, SCADA systems, and end-to…
S62
Panel Discussion Data Sovereignty India AI Impact Summit — Okay, I’m quickly coming to the third question. I think you had so many things. Supply chain trust, absolutely. Today, i…
S63
Keynote-Jeet Adani — Distinguished global leaders, innovators and friends, good afternoon and namaste. We gather here today at a decisive inf…
S64
Keynote-Jeet Adani — She rises to stabilize, she rises to anchor a world searching for balance and she rises to build systems that are inclus…
S65
WS #139 Internet Resilience Securing a Stronger Supply Chain — Mark Nottingham from CloudFlare provided front-line operational perspectives, explaining that the internet is “inherentl…
S66
Emerging Markets: Resilience, Innovation, and the Future of Global Development — And if you think about infrastructure development, particularly, say, in the US and Europe, and you look at kind of four…
S67
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Giordano Albertazzi — Albertazzi positioned India as central to the AI evolution, citing several key advantages that make the country particul…
S68
Hydrogen diplomacy: Fueling the future of global influence — It’s not a simple fuel swap, but a fundamental transformation from politics and technology to the environment and econom…
S69
State of Play: Chips / DAVOS 2025 — Amandeep Singh Gill raised a thought-provoking point about the gap between technological advancement and institutional r…
S70
National Disaster Management Authority — “It’s one thing to set up the observational network”[105]. “And this data, the availability of data is going to increase…
S71
India’s plan to connect electricity system to Saudi Arabia and UAE via undersea cables — According to local media, India intends to connect its electricity system to the Kingdom of Saudi Arabia (KSA) and the U…
S72
Subrata K. Mitra Jivanta Schottli Markus Pauli — India’s international relations hinge upon some core issues, each of which stems from its geographical location…
S73
Negotiating and Implementing MEAs: A Manual for NGOs — For election to UN bodies the Asian block consists of forty-three countries Afghanistan, Bahrain, Bangladesh, Bhutan, Br…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Shri Ghanshyam Prasad
3 arguments · 157 words per minute · 4025 words · 1530 seconds
Argument 1
Power reliability and green supply challenges for large‑scale data centers (Shri Ghanshyam Prasad)
EXPLANATION
Prasad warns that future AI‑driven data centers will impose variable, high‑capacity loads that require far higher reliability than traditional loads. He stresses the need for N+1+1 redundancy and a fully green power mix to meet these demands.
EVIDENCE
He describes how data centers will shift from 10-50 MW facilities to gigawatt-scale plants, citing the 50 MW Adani-Google centre and upcoming 1,000 MW sites, and explains that such loads must be supplied with at least 1.7-2× capacity, dual power sources, and backup diesel generators to meet N+1+1 reliability standards [34-55]. He adds that India’s renewable capacity has risen sharply, with over 250 GW installed and 40 GW added in ten months, indicating a growing ability to provide green power for these centres [56-61].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The roadmap specifies N+1+1 reliability, dual power sources and a growing renewable mix for gigawatt-scale data centres [S1]; power-source challenges for data centres are also highlighted in the postal-network study [S16]; rising AI data-centre electricity demand versus clean-power lag is discussed in [S22].
MAJOR DISCUSSION POINT
Ensuring reliable, green electricity for massive AI data centres
Argument 2
Highlight of missing IoT and smart‑meter infrastructure in distribution, hindering end‑to‑end data for AI; push for indigenous SCADA and domestic data hosting (Shri Ghanshyam Prasad)
EXPLANATION
Prasad acknowledges that while generation and transmission infrastructure is strong, the distribution layer lacks IoT sensors and smart meters, preventing a complete digital twin of the power system. He calls for indigenous SCADA systems and for keeping all such data within India for security and AI use.
EVIDENCE
He notes that the physical link to customers-distribution and end-user metering-is still lagging, with smart-meter rollout at about 3 crore units installed and 25 crore in pipeline, and that indigenous SCADA development is being pursued with shortlisted companies [367-384]. He further confirms that all smart-meter data will be stored domestically, emphasizing data sovereignty and cyber-security [400-401].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Smart-meter rollout (3 crore installed, 25 crore pipeline) and indigenous SCADA development are detailed in the roadmap [S1]; the same source stresses that all smart-meter data must stay in India [S1]; the need for end-to-end sensor data for AI is raised in the regional leaders discussion on AI-ready data [S18].
MAJOR DISCUSSION POINT
Building a sensor‑rich, secure distribution network for AI data
Argument 3
Mention of indigenization targets for power‑sector equipment (e.g., IGBTs) as part of broader hardware self‑reliance (Shri Ghanshyam Prasad)
EXPLANATION
Prasad outlines government initiatives to increase domestic content of power‑sector hardware, aiming for 100 % indigenization over time. Specific components such as IGBTs are being assigned to Indian manufacturers with a two‑year development target.
EVIDENCE
He cites the launch of a fund (ANRF) and CCUS projects, and describes a program to push power-sector equipment from current 50-80 % domestic content toward full indigenization, highlighting the IGBT example where Indian firms have been given a two-year mandate to commercialise the device [457-466]. He later reiterates the push for domestic production of IGBTs, noting the task given to Indian companies and the two-year deadline [469-475].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The roadmap lists 76 critical power-sector components with indigenization timelines and a two-year mandate for Indian firms to commercialise IGBTs [S1].
MAJOR DISCUSSION POINT
Domestic production of critical power‑sector components
DISAGREED WITH
Tarun Dua
Tarun Dua
4 arguments · 169 words per minute · 2019 words · 714 seconds
Argument 1
Advocacy for co‑locating nuclear SMR power with data centers to ensure stable, low‑cost energy (Tarun Dua)
EXPLANATION
Dua proposes that small modular reactor (SMR) nuclear plants be sited directly at data‑center campuses, providing reliable, low‑cost power without transmission losses. He sees this as a medium‑term solution (3‑5 years) that can be scaled modularly.
EVIDENCE
He explains that SMR reactors can be built near data centres, offering 220 MW modules that can be added in chunks, reducing transit fees and eliminating on-site diesel generators, and estimates a 3-5 year horizon for deployment [238-246].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Co-locating small modular reactors with data-centre campuses as a low-cost, carbon-free power solution is proposed in the AGI-enabled future roadmap [S2].
MAJOR DISCUSSION POINT
Co‑locating nuclear SMRs with AI data centres
DISAGREED WITH
Shri Ghanshyam Prasad
Argument 2
Projection that India will need at least 128,000 GPUs to meet organizational demand, with rapid growth ahead (Tarun Dua)
EXPLANATION
Dua estimates that if 1,000 organisations each require 128 GPUs, India will need roughly 128,000 GPUs, a figure far beyond current capacity. He predicts a steep increase as compute demand expands across sectors.
EVIDENCE
He calculates the need by multiplying 1,000 organisations by a baseline of 128 GPUs each, arriving at a minimum of 128,000 GPUs required for India, and notes that the country is currently about 18 months behind global compute capacity but will eventually leapfrog [194-204].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
GPU demand estimates of at least 128,000 GPUs for Indian organisations appear in the roadmap’s compute-infrastructure analysis [S2]; a later sovereign-AI capability report cites a target of 50,000-60,000 GPUs, underscoring the scale of need [S19].
MAJOR DISCUSSION POINT
Scale of GPU infrastructure required for Indian AI ecosystem
DISAGREED WITH
Parth Sarthi
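Tarun Dua's estimate above is a back-of-the-envelope multiplication; a minimal sketch, with the 40,000-50,000-unit current-capacity figure from [S40] used only for context (the function name `gpu_demand` is illustrative):

```python
# Sketch: the 128,000-GPU estimate, derived from organisations times a
# per-organisation baseline, compared against the capacity figure in [S40].

def gpu_demand(n_orgs: int, gpus_per_org: int) -> int:
    """Total GPUs needed if every organisation requires the same baseline."""
    return n_orgs * gpus_per_org

demand = gpu_demand(1_000, 128)  # 128,000 GPUs
current_capacity = 50_000        # upper end of the 40,000-50,000 range in [S40]
shortfall = demand - current_capacity
print(f"Estimated demand: {demand:,} GPUs; shortfall vs ~50k: {shortfall:,}")
```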
Argument 3
Suggestion that industry should define research priorities and fund targeted projects to bridge the gap (Tarun Dua)
EXPLANATION
Dua argues that industry must create clear, time‑bound roadmaps for goal‑oriented research and allocate funding accordingly, otherwise other nations will outpace India. He emphasizes the need for industry‑driven priorities rather than purely academic pursuits.
EVIDENCE
He cites examples such as optoelectronic networks and co-packaging, stating that industry can set timelines and funding levels for specific research outcomes, and warns that without such direction, external players will fill the gap [287-300].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Industry-academia collaboration with clear, time-bound research roadmaps and dedicated funding is advocated in the roadmap [S2]; similar calls are reflected in the joint statements summary [S1].
MAJOR DISCUSSION POINT
Industry‑led, time‑bound research roadmaps
DISAGREED WITH
Professor Jayadeva
Argument 4
Emphasis on building a domestic micro‑processor ecosystem and reducing reliance on foreign chips for data‑center hardware (Tarun Dua)
EXPLANATION
Dua highlights that while much IP originates in India, lack of a large domestic market hampers mass production of micro‑processors. He calls for scaling domestic volumes to create a viable ecosystem for Indian‑designed chips.
EVIDENCE
He notes that Indian-developed IP is often licensed abroad, and argues that a sizable domestic market would enable Indian companies to commercialise and mass-produce micro-processors, reducing import dependence [447-452].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Reducing dependence on imported chips and fostering a domestic micro-processor market are highlighted in the digital-worlds commentary [S21]; the roadmap also notes that Indian-designed IP is often licensed abroad, stressing the need for domestic volume production [S1].
MAJOR DISCUSSION POINT
Creating a homegrown micro‑processor market
Parth Sarthi
4 arguments · 188 words per minute · 1879 words · 597 seconds
Argument 1
Emphasis that compute growth drives AI progress, implying energy must keep pace (Parth Sarthi)
EXPLANATION
Sarthi stresses that the rapid scaling of GPUs and compute resources is the primary engine behind AI breakthroughs, and that without matching energy supply the AI race cannot be sustained.
EVIDENCE
He references the scaling laws paper from OpenAI, explaining that a ten-fold increase in compute yields predictable model improvements, and points out that the Indian mission has already expanded GPU capacity dramatically, underscoring the link between compute and AI progress [131-148].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-driven data-centre electricity consumption outpacing clean-power growth is discussed in the AI energy demand report [S22]; the impact of compute scaling on AI performance is reviewed in the scaling-laws keynote [S25].
MAJOR DISCUSSION POINT
Compute scaling as the engine of AI advancement
DISAGREED WITH
Tarun Dua
Argument 2
Explanation of scaling laws: each 10× increase in compute yields predictable model improvement, making GPU count the decisive factor (Parth Sarthi)
EXPLANATION
Sarthi describes the empirical discovery that model loss follows a smooth power‑law with compute, meaning every order‑of‑magnitude increase in compute reliably improves model capability, turning AI development into an engineering problem.
EVIDENCE
He recounts the 2020 OpenAI scaling-laws paper, noting a loss exponent of ~0.07, which translates to a 5 % loss reduction per parameter doubling, and cites the Chinchilla correction that data and compute must be balanced, reinforcing the centrality of GPU resources [131-148].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The predictable performance gains from compute scaling (10× increase) are explained in the Surya Ganguli scaling-laws keynote [S25].
MAJOR DISCUSSION POINT
Predictable performance gains from compute scaling
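The power-law claim above can be checked numerically; a minimal sketch assuming loss scales as L ∝ C^(−0.07), with the exponent taken from the argument itself:

```python
# Sketch: with a loss exponent of ~0.07, each doubling of scale multiplies
# the loss by 2**(-0.07) (roughly the 5% reduction cited), and a 10x increase
# multiplies it by 10**(-0.07).

ALPHA = 0.07  # loss exponent quoted in the argument

def relative_loss(scale_factor: float, alpha: float = ALPHA) -> float:
    """Loss relative to baseline after scaling compute by scale_factor."""
    return scale_factor ** (-alpha)

per_doubling = relative_loss(2)   # ~0.953, i.e. ~4.7% reduction per doubling
per_decade = relative_loss(10)    # ~0.851, i.e. ~15% reduction per 10x compute
print(f"Per doubling: loss x {per_doubling:.3f}; per 10x: loss x {per_decade:.3f}")
```

The ~4.7% per-doubling figure is consistent with the "about 5%" quoted in the evidence.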
Argument 3
Personal motivation to return to India because mission‑backed compute and talent enable building frontier models locally (Parth Sarthi)
EXPLANATION
Sarthi explains that his early research exposure, the allure of scaling laws, and the availability of mission‑supported GPU clusters motivated his decision to leave DeepMind and re‑join the Indian ecosystem to build sovereign models.
EVIDENCE
He recounts his academic path, early interest in AI, the desire to work at scale, and the realization that the India mission now provides comparable compute resources, prompting his return to contribute locally [345-352].
MAJOR DISCUSSION POINT
Why top talent is returning to India
Argument 4
Note that the India mission is generating indigenous datasets and local GPU clusters to keep data and models within national boundaries (Parth Sarthi)
EXPLANATION
Sarthi highlights initiatives such as AI Kosh that create Indian‑origin datasets and the rapid scaling of GPU infrastructure under the mission, ensuring that both data and compute remain sovereign.
EVIDENCE
He mentions AI Kosh as a source of Indian datasets for frontier models and points out that the mission has already expanded GPU capacity dramatically, with further growth planned, thereby keeping training and inference domestic [402-405].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The roadmap describes AI-Kosh Indian datasets and rapid expansion of domestic GPU clusters as part of the compute and data-sovereignty strategy [S2]; smart-meter data localisation further reinforces data-sovereignty goals [S1].
MAJOR DISCUSSION POINT
Domestic data and compute for sovereign AI
Professor Jayadeva
3 arguments · 150 words per minute · 2406 words · 958 seconds
Argument 1
Need for a critical mass of researchers, clear career paths, and stronger incentives for PhDs to stay in India (Professor Jayadeva)
EXPLANATION
Jayadeva argues that India must develop a large pool of talented researchers by providing attractive career prospects, clear incentives, and better funding for PhD work, otherwise talent will continue to migrate abroad.
EVIDENCE
He discusses the shortage of manpower, the lack of visible career pathways for PhD graduates, the social and financial deterrents (especially for women), and the need for better industry-academia integration to make research careers lucrative [108-118].
MAJOR DISCUSSION POINT
Building a sustainable AI research talent pipeline
Argument 2
Call for industry‑driven, goal‑oriented research roadmaps while also supporting exploratory basic research (Professor Jayadeva)
EXPLANATION
Jayadeva stresses that both applied, goal‑oriented projects and fundamental exploratory research are essential; industry should define problem‑focused roadmaps, but space must remain for curiosity‑driven breakthroughs.
EVIDENCE
He contrasts industry-driven, time-bound research with pure academic inquiry, noting that while industry can set clear deliverables, basic research often yields unexpected innovations, and both modes should coexist [108-118].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The roadmap calls for industry-driven, time-bound research roadmaps while preserving basic research, mirroring the industry-academia collaboration theme [S2]; the joint statements also echo this balance [S1].
MAJOR DISCUSSION POINT
Balancing applied and basic AI research
DISAGREED WITH
Tarun Dua
Argument 3
Observation that India excels in chip design but must expand domestic manufacturing and increase IP commercialization to reduce import dependence (Professor Jayadeva)
EXPLANATION
Jayadeva points out that India leads in VLSI and chip design, yet manufacturing remains limited; scaling domestic fabs and commercialising IP are needed to achieve self‑reliance in hardware.
EVIDENCE
He describes India’s strong VLSI design ecosystem, the existence of design houses in Bangalore, Pune, Hyderabad, and the limited but growing semiconductor manufacturing capacity at SCL, emphasizing that design costs dominate and that domestic production will expand in the next few years [478-494].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s strong VLSI design ecosystem and the need to scale domestic fabs and commercialise IP are discussed in the roadmap [S1]; reducing foreign-chip dependence is further highlighted in the digital-worlds piece [S21].
MAJOR DISCUSSION POINT
From design excellence to manufacturing self‑sufficiency
Suvrat Bhoosha
1 argument · 60 words per minute · 1654 words · 1631 seconds
Argument 1
Assertion that sovereign AI demands ownership of power, hardware, and talent—not just imported resources (Suvrat Bhoosha)
EXPLANATION
Bhoosha contends that building true frontier AI for India requires domestic control over energy, compute hardware, and research talent, rather than relying on imported models or infrastructure.
EVIDENCE
He states that India must own the power, hardware, and talent that drive AI, warning against merely importing models and running them on borrowed infrastructure, and frames this as the thesis of the session [3-7].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The sovereign-AI ecosystem-control over energy, compute hardware and talent-is a central theme of the AGI-enabled future roadmap [S2] and reinforced in the sovereign AI capability report [S19].
MAJOR DISCUSSION POINT
Sovereign control over AI ecosystem components
Audience
1 argument · 168 words per minute · 972 words · 346 seconds
Argument 1
Call for government‑backed creation of a secure, country‑wide physical data layer (IoT sensors, geo‑tagged assets) to feed AGI development (Audience)
EXPLANATION
The audience member stresses that without a comprehensive IoT and sensor network covering generation to consumption, India cannot generate the massive, sovereign data needed for AGI, and urges government action to build this layer securely within national borders.
EVIDENCE
He highlights the lack of end-to-end geo-tagging of assets, the security risk of data hosted abroad, and the need for domestic data centres and LLMs to manage this data, calling for a government-driven, secure physical data infrastructure [393-399].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The regional leaders discussion stresses the need for a nation-wide IoT and geo-tagged asset layer for AI-ready data [S18]; the roadmap’s smart-meter and indigenous SCADA initiatives further underline the requirement for a secure physical data infrastructure [S1].
MAJOR DISCUSSION POINT
National IoT infrastructure for AGI data needs
Agreements
Agreement Points
Similar Viewpoints
Unexpected Consensus
Differences
Different Viewpoints
Feasibility and timeline of co‑locating nuclear SMR power with AI data centres
Speakers: Tarun Dua, Shri Ghanshyam Prasad
Advocacy for co‑locating nuclear SMR power with data centers to ensure stable, low‑cost energy (Tarun Dua)
Mention of indigenization targets for power‑sector equipment (e.g., IGBTs) as part of broader hardware self‑reliance (Shri Ghanshyam Prasad)
Tarun proposes that small modular reactor (SMR) nuclear plants be sited directly at data-centre campuses, offering modular 220 MW blocks that can be added in chunks and would eliminate transmission fees and diesel backup, estimating a 3-5 year deployment horizon [238-246]. Prasad counters that the timeline for SMR deployment is likely longer, cites containment-zone requirements that push plants away from populated areas, and stresses that the immediate focus remains on expanding renewable capacity and hydro-pump storage rather than near-term nuclear roll-out [239-246][56-61].
POLICY CONTEXT (KNOWLEDGE BASE)
Recent corporate agreements (e.g., Google partnering with Kairos Power) signal industry interest in using SMRs to power AI workloads, with a target rollout by 2030-2035, while regulators such as the U.S. FERC are scrutinising co-location of data centres at power plants, highlighting policy and regulatory hurdles. The broader AI-driven electricity demand is also reviving nuclear as a low-carbon solution [S47][S48][S42].
Approach to setting research priorities for AI – industry‑driven time‑bound roadmaps vs a balanced academia‑industry model
Speakers: Tarun Dua, Professor Jayadeva
Suggestion that industry should define research priorities and fund targeted projects to bridge the gap (Tarun Dua)
Call for industry‑driven, goal‑oriented research roadmaps while also supporting exploratory basic research (Professor Jayadeva)
Tarun argues that industry must create clear, time-bound roadmaps for goal-oriented research and allocate funding, warning that without such direction other nations will outpace India [287-300]. Jayadeva stresses that while industry-driven, problem-focused projects are needed, a parallel stream of basic, curiosity-driven research is essential and that industry-funded projects are often at arm’s length, requiring a mix of short-, medium- and long-term research [308-317]. Both agree research is critical but differ on the balance and governance of the research agenda.
POLICY CONTEXT (KNOWLEDGE BASE)
India’s AI roadmap calls for new collaboration models with government support to bridge gaps between academia and industry, and the EU industrial policy explicitly encourages coherent industry-academia research roadmaps. These sources underline the policy push for balanced, government-backed partnership frameworks rather than purely industry-driven timelines [S37][S38][S39].
Optimal placement of AI data centres – single‑site nuclear co‑location vs diversified, renewable‑proximate landing points
Speakers: Tarun Dua, Shri Ghanshyam Prasad
Advocacy for co-locating nuclear SMR power with data centers to ensure stable, low-cost energy (Tarun Dua)
Mention of indigenization targets for power-sector equipment (e.g., IGBTs) as part of broader hardware self-reliance (Shri Ghanshyam Prasad)
Tarun pushes a model where each data centre is paired with an on-site SMR, creating a self-contained power source that avoids transmission losses and diesel generators [238-246]. Prasad, while acknowledging the need for reliable supply, advocates a strategy of geographic diversification – multiple landing points across the country and locating data centres close to renewable energy sources (REOs) rather than a single nuclear hub [62-68][239-246]. The two visions conflict on whether concentration or dispersion is the better path to reliable, green power for AI workloads.
POLICY CONTEXT (KNOWLEDGE BASE)
AI’s rising power demand is prompting a reconsideration of nuclear as a stable, low-carbon source, yet co-location of data centres with power plants faces regulatory scrutiny (FERC). Sustainability discussions also highlight the need for diversified, renewable-proximate sites to mitigate environmental impacts [S42][S48][S44].
Scale of GPU infrastructure required for India’s AI ecosystem – 128,000 GPUs needed vs current mission‑driven scaling seen as sufficient
Speakers: Tarun Dua, Parth Sarthi
Projection that India will need at least 128,000 GPUs to meet organizational demand, with rapid growth ahead (Tarun Dua)
Emphasis that compute growth drives AI progress, implying energy must keep pace (Parth Sarthi)
Tarun estimates a minimum national requirement of 128,000 GPUs, based on 1,000 organisations each needing 128 GPUs, and notes India is currently about 18 months behind global compute capacity [194-204]. Parth points to the rapid expansion already achieved under the IndiaAI Mission – from 38,000 to over 50,000 GPUs – and argues that continued mission support will meet the scaling needs, emphasizing that compute growth (driven by scaling laws) is the primary engine of AI progress [131-148][402-405]. The two speakers differ on the magnitude of the short-term shortfall and the urgency of further GPU procurement.
POLICY CONTEXT (KNOWLEDGE BASE)
Evidence shows India’s current GPU capacity (40-50k units) falls far short of the estimated requirement (≈200k GPUs), with recent statements noting growth to only 50-60k GPUs, underscoring a significant policy-level infrastructure gap [S40][S41].
Unexpected Differences
Nuclear SMR timeline and suitability for AI data‑centre power
Speakers: Tarun Dua, Shri Ghanshyam Prasad
Advocacy for co-locating nuclear SMR power with data centers to ensure stable, low-cost energy (Tarun Dua)
Mention of indigenization targets for power-sector equipment (e.g., IGBTs) as part of broader hardware self-reliance (Shri Ghanshyam Prasad)
A power‑sector regulator (Prasad) expressing caution about SMR deployment timelines and land‑use constraints contrasts sharply with an industry entrepreneur (Tarun) who sees SMRs as a near‑term, 3‑5 year solution. The divergence is surprising given their overlapping expertise in energy infrastructure.
POLICY CONTEXT (KNOWLEDGE BASE)
Corporate pilots such as Google’s SMR partnership target operational SMRs by 2030-2035 for AI workloads, while broader analyses link AI-driven electricity demand to renewed interest in nuclear as a low-carbon, reliable power source [S47][S42].
Industry‑led research roadmaps versus balanced academia‑industry research model
Speakers: Tarun Dua, Professor Jayadeva
Suggestion that industry should define research priorities and fund targeted projects to bridge the gap (Tarun Dua)
Call for industry-driven, goal-oriented research roadmaps while also supporting exploratory basic research (Professor Jayadeva)
Tarun’s push for strictly industry-driven, time-bound research funding clashes with Jayadeva’s insistence that basic, curiosity-driven research must coexist – a notable stance given that Jayadeva himself also calls for industry-driven, goal-oriented roadmaps.
POLICY CONTEXT (KNOWLEDGE BASE)
Both Indian and European AI strategies stress the need for collaborative, government-supported research roadmaps that combine industry’s practical timelines with academia’s long-term inquiry, moving beyond purely industry-led approaches [S37][S38][S39].
Overall Assessment

The panel shows strong consensus on the overarching goal of building a sovereign AI ecosystem for India, but the discussion reveals substantive disagreements on three fronts: (1) the energy mix and deployment timeline for powering AI data centres – nuclear SMRs versus renewable-centric, diversified approaches; (2) the mechanism for steering AI research – industry-only, deadline-driven roadmaps versus a blended model that retains basic research; (3) the scale and urgency of GPU procurement – a projected 128,000-GPU shortfall versus confidence that the current mission-driven scaling trajectory will suffice. These divergences reflect differing institutional perspectives (government regulator, private infrastructure provider, academia, and compute-focused technologist) and indicate that coordinated policy, investment, and research-governance frameworks will be essential to align energy, compute, and talent strategies.

Moderate to high disagreement on implementation pathways, but low disagreement on the end goal. The lack of alignment could delay the creation of a fully sovereign AI stack unless a cross‑sectoral roadmap reconciles energy‑policy choices, research funding models, and compute capacity planning.

Partial Agreements
All speakers concur that India must build a sovereign AI ecosystem that is owned end‑to‑end – power, compute, data, and talent – but each stresses a different lever (renewable power, GPU capacity, talent pipelines, or compute scaling) as the primary path forward [3-7][56-61][194-204][108-118][131-148]. The shared goal is clear, yet the routes to achieve it diverge.
Speakers: Suvrat Bhoosha, Shri Ghanshyam Prasad, Tarun Dua, Professor Jayadeva, Parth Sarthi
Assertion that sovereign AI demands ownership of power, hardware, and talent, not just imported resources (Suvrat Bhoosha)
Power reliability and green supply challenges for large-scale data centres (Shri Ghanshyam Prasad)
Projection that India will need at least 128,000 GPUs to meet organisational demand (Tarun Dua)
Need for a critical mass of researchers, clear career paths, and stronger incentives for PhDs to stay in India (Professor Jayadeva)
Emphasis that compute growth drives AI progress, implying energy must keep pace (Parth Sarthi)
Takeaways
Key takeaways
AI-driven workloads will require massive, reliable, and preferably green power; data-centre load will be variable and demand N+1+1 redundancy.
Co-locating data centres with stable power sources such as nuclear SMRs and renewable hubs is seen as a strategic path.
Scaling laws demonstrate that increasing compute (GPU count) directly improves model performance, making GPU scaling the primary bottleneck for sovereign AI.
India will need at least ~128,000 GPUs to satisfy current organizational demand, with rapid growth expected.
A robust talent pipeline is essential: more PhDs, clearer career incentives, and stronger industry-academia collaboration are needed.
Research funding models must balance goal-directed industry projects with exploratory basic research and improve IPR-sharing mechanisms.
Physical data collection (IoT, smart meters, digital twins) is currently insufficient; building an end-to-end sensor and data infrastructure is critical for AGI and data sovereignty.
Indigenous hardware development is progressing in chip design, but domestic manufacturing and full indigenization of power-sector equipment (e.g., IGBTs) remain priorities.
Data and models must remain within India to ensure sovereignty; the IndiaAI Mission is creating local GPU clusters and indigenous datasets.
Resolutions and action items
Accelerate rollout of smart meters: more than 3 crore installed and 25 crore planned within 2-3 years, to enable granular data collection and domestic SCADA development.
Leverage the RDSS (Revamped Distribution Sector Scheme) programme to fund reliability and automation projects, including indigenous SCADA systems.
Continue scaling GPU infrastructure under the IndiaAI Mission, targeting 38,000-50,000 GPUs now and expanding further.
Government to channel the ANRF (Anusandhan National Research Foundation) fund and other large-scale grants (e.g., 20,000 crore for CCUS) toward university-industry research clusters and commercialization pathways.
Industry (e.g., E2E Networks) to define research roadmaps and collaborate with academia on targeted problems such as optoelectronic networking and co-packaged optics.
Explore deployment of SMR nuclear reactors near future data-centre campuses, with land-contingency planning for containment zones.
Create multiple international landing points for power transmission (e.g., Vizag, Gopalpur) to diversify supply and support data-centre clusters.
Encourage joint PhD-industry programs and MS-Research degrees to provide paid research opportunities and retain talent.
Unresolved issues
Exact timeline and feasibility for large-scale SMR deployment to power data centres (3-5 years vs longer).
Concrete mechanisms to align industry-driven research funding with academic incentives and IPR sharing.
How to fully integrate IoT sensors and digital twins across the distribution network and ensure all data stays within Indian data centres.
Specific policies to accelerate domestic manufacturing of critical semiconductor components and reduce import dependence.
Detailed plan for achieving 100 GW of pumped-hydro storage and 100 GW of nuclear capacity by 2047, including financing and site selection.
Strategies to overcome cultural and financial deterrents for students pursuing PhDs in India, especially for women.
Clarification on the role of the India Energy Stack in addressing physical-layer data gaps.
Suggested compromises
Adopt a mixed research portfolio: combine goal-oriented industry projects with longer-term exploratory academic work.
Use a hybrid power mix for data centres: green renewables where possible, supplemented by nuclear SMRs and battery storage, to meet reliability and cost goals.
Allow parallel development across sectors (energy, compute, hardware, talent) rather than a strictly sequential rollout.
Encourage industry to fund research while government provides matching grants and policy support, reducing the sole reliance on private capital.
Maintain data sovereignty by hosting both the physical sensor data and AI models on domestic infrastructure, while still leveraging global expertise where needed.
Thought Provoking Comments
Building true frontier intelligence from India is a monumental ecosystem play. We cannot simply import models and talents, run them on borrowed infrastructure, and call them our own.
Sets the foundational premise that AI sovereignty requires an integrated approach across hardware, talent, and research, challenging the notion that importing technology is sufficient.
Frames the entire discussion, prompting each panelist to address their domain (energy, compute, talent) as part of a cohesive ecosystem rather than isolated solutions.
Speaker: Suvrat Bhoosha
We are now transitioning from 10‑50 MW data centers to gigawatt‑scale facilities. The challenge is managing a variable, high‑density load with N+1+1 reliability and ensuring green power supply via solar, wind, hydro, and storage.
Provides a concrete, data‑driven picture of India’s upcoming energy demand for AI workloads and highlights the technical reliability and sustainability hurdles.
Triggers deeper conversation about grid planning, renewable integration, and the need for diversified landing points, influencing subsequent questions on energy‑data center co‑location and policy support.
Speaker: Shri Ghanshyam Prasad
India moved from building for the West, to building for ourselves, and now to building for the world – becoming an innovation hub for cloud infrastructure.
Articulates a strategic shift in India’s tech narrative, emphasizing self‑reliance and global leadership rather than mere service provision.
Reorients the panel to view compute infrastructure as an exportable asset, leading to discussions on scaling GPU capacity and serving international demand.
Speaker: Tarun Dua
Students prefer foreign PhDs because career pathways and incentives are unclear in India; we need industry‑university collaboration, clearer IPR frameworks, and early research opportunities to retain talent.
Diagnoses the talent pipeline problem, linking it to systemic incentives, cultural expectations, and the gap between academia and industry.
Steers the conversation toward concrete policy and institutional reforms, prompting follow‑up remarks about government funds (ANRF) and industry‑driven research programs.
Speaker: Professor Jayadeva
Scaling laws show that every 10× increase in compute yields predictable performance gains; however, the next breakthrough is reasoning models trained via RL environments, which rely more on domain experts and ordinary compute than on massive GPU farms.
Challenges the prevailing belief that raw GPU count is the sole driver of AI progress, introducing the importance of data environments and human expertise.
Shifts the panel from a hardware‑centric view to a balanced perspective that values algorithmic innovation and domain‑specific data, influencing later audience questions about data generation and IoT.
Speaker: Parth Sarthi
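The “predictable performance gains” Parth refers to are usually stated as a compute power law. As a point of reference, a standard formulation from the scaling-laws literature (not an equation given in the session) is:

```latex
% Reducible loss falls as a power law in training compute C:
%   L(C) = L_inf + a * C^(-alpha)
L(C) = L_{\infty} + a\,C^{-\alpha}
% A 10x increase in compute multiplies the reducible loss by 10^{-\alpha},
% which is what makes the gains "predictable" on a log-log plot.
```

Here $L_{\infty}$ is the irreducible loss floor and the exponent $\alpha$ (empirically small and positive) sets how fast returns accrue with compute; the stability of $\alpha$ across scales, rather than any particular constant, underpins the claim that each 10× compute increase yields a predictable improvement.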
We have installed over 30 million smart meters (with 250 million more in the pipeline) to ensure data stays within India, and we are developing indigenous SCADA systems to create a secure, end-to-end digital twin of the power grid.
Directly addresses the audience’s concern about the physical data layer, highlighting concrete steps toward data sovereignty and infrastructure digitization.
Introduces the IoT/physical‑layer dimension into the dialogue, linking energy infrastructure to AI data needs and prompting further discussion on security and domestic data centers.
Speaker: Shri Ghanshyam Prasad
SMR nuclear reactors could power data‑center campuses directly, eliminating transmission fees and reducing reliance on diesel generators; modular 220 MW units allow scalable, land‑efficient deployment.
Presents a forward‑looking, alternative energy solution tailored to the high‑density, reliable power needs of future AI data centers.
Expands the energy conversation beyond renewables, prompting the panel to consider nuclear as a viable component of the AI ecosystem and influencing the discussion on site‑specific power planning.
Speaker: Tarun Dua
Research should not be purely arm‑length; we need integrated labs where industry and academia co‑develop problems, as exemplified by Cadence’s partnership with Berkeley, to foster both short‑term applicability and long‑term innovation.
Advocates for a hybrid research model that balances goal‑directed projects with exploratory science, addressing earlier concerns about funding and collaboration.
Reinforces earlier points about talent pipelines and leads to suggestions for joint funding mechanisms and industry‑led research programs.
Speaker: Professor Jayadeva
Most microprocessor IP is actually designed in India; the bottleneck is domestic volume to commercialize these designs, which will in turn drive a self‑sustaining semiconductor ecosystem.
Highlights the often‑overlooked contribution of Indian engineers to global chip design and frames volume demand as the key to full indigenization.
Links the compute hardware discussion to broader manufacturing and supply‑chain considerations, supporting the narrative of building a complete sovereign AI stack.
Speaker: Tarun Dua
Overall Assessment

The discussion evolved from a high‑level call for an AI ecosystem to a nuanced, multi‑layered roadmap. Early remarks about energy demand and self‑reliance set the stage, but the most pivotal shifts occurred when Parth introduced scaling laws and the importance of domain‑specific RL environments, and when Ghanshyam addressed the physical data layer through smart meters and indigenous SCADA. These insights redirected the conversation from pure compute capacity to the intertwined challenges of energy reliability, data sovereignty, talent pipelines, and collaborative research. Collectively, the highlighted comments steered the panel toward concrete, cross‑sectoral actions required for India to achieve sovereign frontier AI.

Follow-up Questions
Should data centers be co‑located with renewable energy generation hubs (e.g., Rajasthan) to reduce transmission losses and ensure green power supply?
Locating compute facilities near clean energy sources could improve reliability, lower costs, and help meet green‑power commitments for AI workloads.
Speaker: Suvrat Bhoosha, Tarun Dua, Shri Ghanshyam Prasad
How can India develop a comprehensive physical‑layer IoT ecosystem (sensors, SCADA, digital twins) across the power sector to generate the massive data needed for AI/AGI?
Large‑scale, real‑time grid data is essential for training advanced models and for optimizing grid operations; current gaps in distribution‑level automation limit data availability.
Speaker: Audience member (Pradeep Subramaniam)
Why is the government not providing stronger support to create the large‑scale IoT data infrastructure needed for AI?
Policy and funding mechanisms are required to accelerate deployment of smart meters, communication networks, and indigenous SCADA platforms.
Speaker: Audience member (Pradeep Subramaniam)
How can India ensure that all data generated by smart meters and other IoT devices remains within the country and is hosted in Indian data centers to guarantee data sovereignty and security?
Critical infrastructure data must be stored domestically to avoid security risks and to comply with sovereign AI objectives.
Speaker: Audience member (Pradeep Subramaniam)
What indigenous, domain‑specific datasets (e.g., agricultural loan assessment in Tamil, legal aid in Hindi) are needed to train frontier models that address Indian problems, and how can they be created at scale?
Tailored datasets are crucial for building AI systems that solve real Indian use‑cases and avoid reliance on foreign data sources.
Speaker: Parth Sarthi, Audience member (Pradeep Subramaniam)
How can industry‑academia collaboration be structured to deliver both goal‑directed research for immediate product needs and exploratory research for long‑term innovation?
Effective partnership models, IP‑sharing frameworks, and joint funding are needed to translate academic breakthroughs into commercial AI solutions quickly.
Speaker: Tarun Dua, Professor Jayadeva, Professor (multiple)
What incentive structures and career pathways can be created to attract more PhDs and research talent (especially women) to stay in India and work on AI?
A clear, well‑paid research career path is essential to build the critical mass of talent required for sovereign AI development.
Speaker: Professor Jayadeva
How can India accelerate the indigenization of power‑sector equipment (e.g., IGBTs, smart meters, SCADA systems) to reduce dependence on imports?
Domestic manufacturing of key components will improve supply‑chain security and lower costs for AI‑driven grid modernization.
Speaker: Shri Ghanshyam Prasad, Audience member
What is the feasibility, timeline, and regulatory pathway for deploying Small Modular Reactors (SMRs) to power high‑density data centers?
SMRs could provide reliable, low‑carbon baseload power for AI compute clusters, but their rollout schedule and land‑use constraints need clarification.
Speaker: Tarun Dua, Shri Ghanshyam Prasad
What detailed energy‑consumption projections for AI data centers are needed over the next 12‑36 months, and how should the national grid be modeled to accommodate them?
Accurate forecasting is required to plan transmission upgrades, renewable integration, and capacity reserves for the anticipated AI load.
Speaker: Suvrat Bhoosha (question to Shri Ghanshyam Prasad)
How can India develop indigenous, secure SCADA platforms and the associated cybersecurity framework for the power grid?
Building home‑grown SCADA systems will reduce reliance on foreign vendors and address national security concerns while supporting AI‑driven grid analytics.
Speaker: Shri Ghanshyam Prasad
How can LLMs and digital‑twin technologies be leveraged to shorten the prototype cycle time for physical hardware (e.g., chips, sensors) needed for AI infrastructure?
Reducing design‑to‑fabrication time can accelerate deployment of specialized AI hardware and IoT devices.
Speaker: Tarun Dua
What roadmap is needed to achieve 100 % domestic content for microprocessor design and manufacturing, and how will this impact AI compute supply?
Full indigenization of semiconductor IP and fabrication is critical for self‑reliant AI hardware ecosystems.
Speaker: Professor Jayadeva, Tarun Dua, Shri Ghanshyam Prasad
How should a unified framework be created to develop primary, secondary, and tertiary sectors in parallel for AI‑enabled manufacturing (e.g., semiconductor fabs, 9 GW power for data centers)?
Coordinated policy across sectors is needed to avoid bottlenecks and ensure synchronized growth of hardware, energy, and AI capabilities.
Speaker: Audience member (unspecified)
What policies and incentives are required to encourage adoption of physical‑layer IoT in low‑labour‑cost sectors (e.g., vending machines) so that AI can create new jobs rather than replace existing ones?
Understanding how to integrate sensors and connectivity in traditionally labour‑intensive businesses will unlock AI‑driven productivity gains and new employment opportunities.
Speaker: Audience member (unspecified)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

How Multilingual AI Bridges the Gap to Inclusive Access


Session at a glance: Summary, keypoints, and speakers overview

Summary

The India AI Summit 2026 opened with Markus Reubi stressing that AI can serve the public good only if it accommodates all languages and cultures, framing linguistic inclusion as a democratic imperative rather than merely a technical issue [4-5]. He linked the summit to a series of international events, noting Switzerland’s role in the Paris 2025 public-interest AI process, the current India summit, and the upcoming Geneva AI Summit in 2027 [6-7].


Torsten Schwede announced three new Indo-Swiss joint research calls covering geosciences, social sciences, and One Health, and introduced a longer-term Indo-Swiss Research Framework Program that will include artificial intelligence as a priority area [27-30][41-45]. He also unveiled new “Explore, Experiment, and Expand” grants, increased mobility funding, and plans for flagship events to sustain collaboration between Swiss and Indian researchers [46-49].


Nina Frey then introduced the panel and highlighted the Bhashini initiative, which aims to overcome language barriers by developing AI tools for 22 Indian constitution-listed languages, now expanded to 36, and by creating speech-recognition, translation, text-to-speech, OCR and digital dictionaries [78-86][89-95]. Amitabh Nag explained that Bhashini’s data were collected through field teams building monolingual and bilingual corpora, a process he likened to teaching a child by giving it more books, and described early applications such as voice-first agricultural advice for farmers and interactive manuscript platforms [100-108].


Aya Bedir described Current AI as a public-private partnership with initial $400 million commitments aiming for $2.5 billion, dedicated to multilingual diversity and cultural preservation, and warned that large-tech data-scraping often treats communities as mere data rather than partners [124-133][135-144][158-164]. She emphasized the need for community-led data collection and frugal AI solutions that work under resource constraints, especially for under-served languages in regions like the Arab world [166-172].


Alex Ilic presented the open-source Apertus model, noting that talent scarcity (only about a hundred experts worldwide) limits foundation-model development and that academia must be empowered with compute and benchmarks [183-186][210-218]. He reported that Apertus currently includes 1,000 languages, with 60% English data, and outlined a strategy to raise performance for non-English languages while collaborating with initiatives such as Bhashini and Singapore’s C-Line model (likely a transcript rendering of SEA-LION) [194-199][210-212].


Annie Hartley illustrated the dangers of deploying language models in high-stakes medical contexts, recounting a case where a model gave dangerous advice for diabetic ketoacidosis in an Ethiopian language, underscoring the need for rigorous real-world validation [287-295]. She introduced the MOVE (Massive Open Online Validation and Evaluation) project, which gathers feedback from clinicians and patients to continuously improve models, arguing that neutral academic oversight is essential and requires dedicated funding [328-335][339-340].


Across the session, speakers converged on the conclusion that multilingual, culturally aware AI must be built through coordinated funding, open models, community-driven data, academic talent, and systematic validation to achieve equitable public benefit [5][45][135][210][328].


Keypoints


Multilingual AI is framed as a democratic imperative and a cornerstone for inclusive digital public services.


Markus Reubi stresses that “AI can only serve the public good if it serves all languages and all cultures” and calls linguistic inclusion a “democratic imperative” [4-6][5]. He highlights Switzerland’s commitment to multilingual models such as Apertus, describing them as “foundations for inclusive digital public services” [14-15].


The Indo-Swiss partnership announces new joint research calls and a strategic AI funding framework.


Torsten Schwede announces three new joint calls in geosciences, social sciences, and One-Health, and introduces a longer-term Indo-Swiss Research Framework Program that earmarks AI as a high-priority topic [27-34][41-45]. He also unveils new “Explore, Experiment, and Expand” grants, increased mobility funding, and plans for flagship events to sustain the collaboration [46-49].


India’s Bhashini initiative showcases a large-scale effort to build multilingual AI capabilities despite data scarcity.


Amitabh Nag explains that Bhashini targets 22 constitutional languages (now expanding to 36) across speech-to-text, translation, speech synthesis, OCR, and digital dictionaries [77-84][89-98]. He describes the “brute-force” field collection of monolingual and bilingual corpora to overcome the “non-availability of digital data” [101-108], and cites early use-cases such as a voice-first agricultural advisory system for farmers [107].


Ethical and cultural concerns are raised about big-tech data practices and the need for community-led, culturally respectful AI.


Aya Bedir warns that large-scale multilingual projects often rely on “scraping data” and treating “individuals and communities as data,” which can be “condescending” [158-166][160-164]. She argues that true preservation of language and culture requires “getting as close as possible to the communities themselves” [128-138].


Open-source multilingual models, talent, compute resources, and real-world validation are identified as critical for trustworthy AI.


Alex Ilic outlines the bottlenecks of infrastructure and scarce talent, describes Apertus as an open foundation model, and stresses the need to expand language coverage beyond the current 60 % English-dominant training set [183-190][194-202][215-218]. A participant from NTU highlights the C-Line model for Southeast Asian languages and the importance of “frugality” and “sovereignty” in model development [256-262][265-272]. Annie Hartley adds a medical perspective, showing how inaccurate multilingual outputs can endanger patients and emphasizing the MOVE (Massive Open Online Validation and Evaluation) project for high-stakes real-world testing [287-298][329-336].


Overall purpose/goal:


The discussion serves to mobilise international stakeholders-governments, research institutions, and private partners-around the development of multilingual, culturally-aware AI. It aims to secure funding, share open-source tools, and establish governance frameworks that ensure AI serves the public good across all languages and cultures.


Overall tone:


The conversation begins with a formal, diplomatic tone emphasizing cooperation and shared vision. It then shifts to an enthusiastic, technical tone when announcing funding programmes and describing concrete AI projects. Mid-session, a more cautionary and critical tone emerges around ethical data practices. The final segment adopts a pragmatic, urgent tone focused on real-world impact, validation, and the need for inclusive, accountable AI. Throughout, the tone remains collaborative and solution-oriented.


Speakers

Markus Reubi – Swiss representative; opening speaker introducing the session.


Torsten Schwede – President of the Swiss National Science Foundation; involved in Indo-Swiss research collaboration [S7].


Nina Frey – Executive Director of ICAIN (referred to as ICANN in the transcript) and session moderator [S18].


Alex Ilic – Founder and Executive Director of the AI Center; co-founder of ICAIN [S17].


Amitabh Nag – CEO of Bhashini (also called Bajini), the national language initiative [S10].


Aya Bedir – CEO of Current AI, a public-private partnership focused on multilingual AI [S12].


Petri Myllymäki – Representative of the Finnish Supercomputing Centre and the ELLIS Network; founding member of ICAIN; member of the UN Secretary-General’s Age Lab [S1].


Annie Hartley – Leader of the LIGHTS lab (Laboratory for Intelligent Global Health and Humanitarian Response Technology); affiliated with EPFL and Yale University [S5].


Participant – Dean of the College of Humanities, Arts and Social Sciences, NTU Singapore; historian (no surname provided).


Additional speakers:


Professor Thorsten Svede – a transcript variant of Torsten Schwede, President of the Swiss National Science Foundation (introduced by Markus Reubi).


Damian – Representative from the Finnish Supercomputing Centre (acknowledged by Nina Frey).


Martin Disney (likely Martin Tisné) – Founder of Current AI (mentioned by Aya Bedir).


Full session report: Comprehensive analysis and detailed insights

Markus Reubi opened the summit by framing multilingual artificial intelligence as a democratic imperative, not merely a technical challenge, and warned that linguistic exclusion is a persistent barrier to digital participation [4-5]. He cited India’s Bhashini initiative as an early example of collaborative multilingual effort and situated the event within a broader international trajectory – from the Paris 2025 public-interest AI process through the India AI Summit 2026 to the planned Geneva AI Summit 2027 – underscoring Switzerland’s commitment to a continuous, cooperative global AI governance framework [6-8].


Torsten Schwede announced three new Indo-Swiss joint research calls covering geosciences, social sciences and a One Health programme [27-34], and introduced the longer-term Indo-Swiss Research Framework Programme that earmarks artificial intelligence as a high-priority thematic area [41-45]. To stimulate collaboration he unveiled the “Explore, Experiment, and Expand” grants, increased mobility funding for consortium members, and a schedule of flagship bilateral events in Switzerland and India [46-49]. (The transcript contains a transcription error “IndusVis”; the correct term is Indo-Swiss.)


Nina Frey, Executive Director of ICAIN, introduced the panel and handed the floor to Amitabh Nag, who detailed India’s Bhashini initiative. Nag explained that Bhashini initially targets the 22 languages of India’s Eighth Schedule, providing automatic speech recognition, bidirectional text-to-text translation, text-to-speech synthesis, optical character recognition and a digital dictionary [78-86]; the programme has since expanded to 36 languages and is working on digitising unwritten tribal languages [89-95]. Confronted with a “non-availability of digital data” problem, Bhashini deployed roughly 200 field workers to create monolingual and bilingual corpora, a “brute-force” data-collection effort likened to teaching a child by giving it more books [100-108]. Early applications include a voice-first advisory system for farmers that answers agricultural questions in the farmer’s native language [107-108].


Aya Bedir presented Current AI, a public-private partnership that has secured an initial $400 million commitment (aiming for $2.5 billion) from the French government and multiple other national and philanthropic partners [124-133]. From its inception, Current AI has placed “multilingual diversity” at the centre of its mission, extending the focus to broader cultural preservation [135-144]. Bedir warned that large-tech projects often rely on “brute-force data-scraping” and treat communities as raw data rather than partners, urging a shift toward community-led data collection that respects consent and cultural ownership [158-164].


Alex Ilic, co-founder of the AI Centre and ICAIN, described Apertus, an open, transparent multilingual foundation model developed by ETH Zurich and EPFL [13-15][187-194]. He highlighted a severe talent bottleneck: only about one hundred experts worldwide can build such foundation models. He argued that academia needs both compute resources and culturally relevant benchmarks to close the gap [183-186][210-218]. Apertus currently supports roughly 1,000 languages, though its training data remains 60% English [195-196]. Ilic outlined a step-by-step plan to raise non-English performance, assess the cost of achieving parity, and leverage collaborations with Bhashini and Singapore’s SEA-LION model [197-202][210-212].


Petri Myllymäki, representing the Nordic arm of the ELLIS network, reinforced the human-rights framing of language access, noting that the UN’s International Independent Scientific Panel on AI identified “access to language and culture” as a fundamental right [226-229]. He used a vivid metaphor, that all countries should be “invited to the dinner table, not part of the menu”, to stress inclusive participation [238-239].


A senior participant from NTU Singapore elaborated on the SEA-LION model, which covers 13 Southeast Asian languages (including Tamil) and is built partly on Apertus [256-262]. The speaker emphasized frugal AI, achieving effective models with limited data and compute, and linked this to sovereignty, arguing that AI systems must respect the agency of societies and individuals, accommodate code-switching, and avoid a one-size-fits-all English-centric approach [265-272][277-281].


Annie Hartley, leading the LIGHTS laboratory, illustrated the high-stakes risks of inadequately validated multilingual models. In an Ethiopian language, a model erroneously advised “thou shalt not eat insulin on a Tuesday” for diabetic ketoacidosis, showing how reliance on generic corpora (e.g., the Bible) can produce dangerous medical advice [287-295]. She advocated systematic real-world testing through the MOVE (Massive Open Online Validation and Evaluation) project, which gathers feedback from clinicians and patients to iteratively improve models, and called for dedicated funding to support such validation pipelines [328-336][339-340].


Core agreements emerging from the session were:


* Multilingual AI is essential for democratic participation and constitutes a recognized human right [4-5][226-229].


* Coordinated, multistakeholder governance (ICAIN, Current AI, Indo-Swiss programmes) is required to pool funding, talent and infrastructure [57-60][126-133][41-45].


* Open-source foundations such as Apertus accelerate inclusive development [13-15][187-194].


* Rigorous real-world validation is indispensable for safety-critical domains [287-295][328-336].


Points of divergence were noted. Bedir emphasized ethical concerns about big-tech data-scraping, whereas Ilic focused on the need for more compute and benchmark-driven improvements [158-164][191-193]. Ilic’s resource-heavy scaling strategy contrasted with the Singapore participant’s advocacy for frugal, sovereignty-preserving approaches that rely on small-data techniques [196-202][262-268]. Finally, Ilic’s emphasis on benchmark-driven evaluation differed from Hartley’s call for field-based, high-stakes validation, highlighting a tension between laboratory metrics and real-world safety [191-193][299-306].


Key take-aways and action items:


* Official launch of the three Indo-Swiss joint calls (geosciences, social sciences, One Health) and establishment of the Indo-Swiss Research Framework Programme [27-34][41-45].


* Introduction of the Explore, Experiment, and Expand grants, expanded mobility funding, and a calendar of flagship bilateral events [46-49].


* Commitment to broaden Apertus’s language coverage and to collaborate with Bhashini and SEA-LION for cross-regional multilingual capability [187-194][210-212].


* Demonstration of a joint Current AI-Bhashini device scheduled for 15:30 in Room 10, showcasing community-centric language technology [145-146].


* Adoption of frugal data-collection methods and respect for national sovereignty, as exemplified by SEA-LION [262-268].


* Deployment of the MOVE project to collect continuous feedback from clinicians and patients in high-stakes settings [328-336].


* Ongoing facilitation by ICAIN to ensure all nations are genuine participants in AI governance rather than mere data sources [57-60][238-239].


Unresolved challenges remain:


* Developing scalable yet ethically sourced data pipelines for low-resource languages [158-164][100-108].


* Defining multilingual performance metrics that go beyond English-centric benchmarks [191-193].


* Securing sustainable financing for validation in conflict-affected or resource-constrained contexts [339-340].


* Reconciling national sovereignty with multinational AI governance, especially regarding code-switching and dialectal variation [265-272][277-281].


* Estimating the true cost of raising non-English language performance to parity with English [196-202].


* Expanding the global talent pool capable of building foundation models [183-186][210-218].


The session concluded with a commitment to continue the dialogue at the Geneva AI Summit 2027 and to operationalise the announced programmes through concrete pilots and funding mechanisms.


Session transcript
Complete transcript of the session
Markus Reubi

as a bridge to democratic access. Switzerland is very pleased to contribute to this global conversation at a pivotal time, a pivotal moment for responsible AI. Our message, which was supposed to be delivered by our president, is very clear. AI can only serve the public good if it serves all languages and all cultures. Today, linguistic exclusion remains one of the most persistent barriers to digital participation; ensuring multilingual access is therefore not only a technical challenge, it’s a democratic imperative. This discussion forms part of the international arc that began with the Paris 2025 public interest AI process, continues here at India AI Summit 2026, and will advance further when Switzerland will happily host the Geneva AI Summit in 2027.

Our shared objective is continuity, cooperation and a genuinely global approach to AI governance. Switzerland is proud that this session brings together partners who embody open and collaborative innovation: India’s Bhashini initiative, Current AI that emerged from the French AI Summit, and then many partners from the broader network of academic and policy institutions of ICAIN, the International Computation and AI Network. Such partners as ELLIS, NTU Singapore and of course the Swiss partners ETH and EPFL. ICAIN really reflects Switzerland’s commitment to equitable access to compute, data and multilingual models. A notable example is Apertus, which maybe many of you have heard of. It was developed by ETH Zurich and EPFL, a fully open and transparent multilingual model designed to support public interest applications across diverse linguistic communities.

As we prepare for Geneva 2027, Switzerland views multilingual AI as a foundation for inclusive digital public services and for strengthening participation across societies. Allow me to briefly, just very briefly, outline today’s agenda. We will begin with the announcement of the launch of the three new joint calls under the lead of the Indo-Swiss Joint Research Programme, JRP, which marks a further strengthening of our bilateral ties in science, innovation and research between Switzerland and India. This will be followed by a panel discussion. We have distinguished international guests and I’m very happy to announce that this will be moderated by my colleague Nina Frey, the Executive Director of ICAIN. Thank you so much for attending. I will hand over the floor to the next speaker, Professor Torsten Schwede, President of the Swiss National Science Foundation.

Very warm welcome. Thank you.

Torsten Schwede

Your Excellencies, ladies and gentlemen, namaste. It’s my great pleasure to be here today with you. It’s a moment to highlight a particularly exciting moment in the Indo-Swiss research collaboration. As many of you know, Switzerland and India have a long-standing, trusted partnership in research built on reciprocity, on joint excellence, and on shared priorities. Today, this collaboration is stronger than ever, and I’m delighted to announce three new calls for joint research projects, as well as the launch of our new Indo-Swiss research framework program between the Swiss National Science Foundation and our Indian partner organizations. This is a really remarkable convergence that underscores both the depth and the breadth of our bilateral engagements.

The three calls for joint research programs span a very diverse range of disciplines and are designed to foster cutting-edge, high-impact research. The first two calls that we launched earlier this year are in the geosciences and in the social sciences. Together with the Indian Ministry of Earth Sciences, we are inviting proposals on natural hazards in mountain regions, a field of great relevance for both our countries as we are each facing very unique geological challenges. In parallel, our call with the Indian Council of Social Science Research opens the door for joint projects on pressing social and societal questions, again strengthening our collaboration in a domain where cross-cultural perspectives significantly enrich the research outcomes.

And two weeks ago, the Swiss National Science Foundation, together with the Indian Department of Biotechnology and the Indian Council of Medical Research, launched a third call focused on One Health, a topic of real global urgency. This One Health call is particularly important for us. It reflects many months of preparation and close coordination with our Indian partners and embodies the holistic approach needed to understand the interconnected health of humans, animals, and the environment. The challenges we face in this area know no borders, and international collaboration is indispensable. We therefore anticipate very high uptake, interest and participation of researchers in both our communities. Taken together, these three simultaneous calls represent an exceptional moment in IndusVis research cooperation.

They showcase our commitment to enabling ambitious science, from fundamental research questions in the natural and the life sciences to complex issues shaped by society, geography or technology. And with each call, we reaffirm our shared belief that long-term, co-created research is the key to addressing the major challenges of our times. So building on these strong foundations, now is the right moment to announce a new strategic long-term collaboration, the Indo-Swiss Research Framework Program between the SNSF and our Indian partner organizations. We aim to create a program in which all researchers wishing to contribute to the Indo-Swiss cooperation can find appropriate support. Thematic calls on strategic areas will be launched together with our Indian partners and remain at the core of this program.

And to this audience, it might not come as a real surprise that one of the high-priority topics we are currently considering is artificial intelligence. In addition to these bilateral and multilateral calls, I’m also pleased to announce that we are launching several new measures and funding schemes to support collaborative research. With our brand-new Explore, Experiment, and Expand grants, we want to give consortia the opportunity to explore new collaborations, new networks, new partnerships. We want to allow them to experiment with blue-sky thinking, topics and methodologies that haven’t been tried before, but we also want to allow them to expand on already established, functional collaborations and build them in an innovative way into the future.

We’re also increasing mobility funding for existing consortia to make sure that every project we fund through our program can lead to a durable collaboration and impactful events that connect with the wider world and the wider society, and that early career researchers can truly benefit from the mobility and the capacity building. We plan to hold frequent flagship events, both in Switzerland and in India, to keep connecting the various partners of this program, from funding actors, beneficiaries of the calls, policy makers and prospective applicants to early career researchers. So make sure you follow our website and social media; there are more updates coming soon. I want to extend my sincere thanks to all our partner organizations here in India for their continued trust and collaboration, and to the research communities in both our countries that show a lot of enthusiasm and engagement in these programs.

So I encourage all interested researchers here in the room and out there to take advantage of these new opportunities and continue building the bridges that make our partnership so successful. Thank you. Thank you very much for your attention.

Nina Frey

Thank you so much. Thank you so much also from my side. My name is Nina Frey, or Katharina Frey as my colleague, or former colleague, Markus Reubi, has introduced me. I am the executive director of ICAIN, which is this network linking academic partners from Europe, Africa, and Singapore. And I’m very glad that I have many representatives from the network that will be on the panel, and actually also one of the board members sitting in the second row, from the Finnish Supercomputing Centre. Thank you, Damian, for coming. So we have such a big panel representing ICAIN that there’s not even a space for me, so I will be standing here. And I would like to invite my panelists to take seats behind the different name cards.

I will introduce you and hand over the mic to you in a minute. Please have a seat. Turned out there was a seat for me. Yeah, I know, we do a group photo at 12:25. So you have to bear with us this afternoon. [A group photo is taken.] Thank you.

Thank you. Wonderful. Thank you so much for bearing with us, for taking pictures. We actually talk about language, but let me think about an analogy to pictures. We’ll dive right into the importance of the language question, and obviously also the cultural and contextual embedding of AI in different settings. So again, allow me to extend my thanks to all my distinguished panelists for coming, and for also allowing us to show how this ICAIN collaboration works from very different angles. The idea of these next 40 minutes is really to try to draw a red line, I think you say, between the different summits. Actually, it started obviously in Bletchley, and I hope we can then showcase how this topic of language and cultural diversity was somehow present in all the different summits and unites us all.

Since we’re here in your host country, allow me to hand over the mic to you to talk about the ICAIN collaboration, and also to share with us why Bhashini has been founded. You had presented your work this morning to me and Alex. It was very impressive how it translated immediately live from Hindi to German to English. But please share with us maybe in the next five minutes what your work is, what it has been, and where you’re going. Thank

Amitabh Nag

you. Yeah, thank you very much, Nina, and thanks for inviting me here. Bhashini stands for Bhasha Interface for India, so it is basically looking at 22 languages which are enshrined in our Eighth Schedule of the Constitution, which basically says that we will have these languages as the languages to start off with for our work in the regions. We started off as a program for transcending the language barrier using artificial intelligence. In these 22 languages, we have been able to do a lot of work. We had our own challenges, but the methodology which we followed was to collaborate with 70 research institutes across the country, and the problem statement was actually divided between all the 70 research institutes.

We were solving five problems. First was automatic speech recognition: the digital systems should be able to understand what we are speaking in all 22 languages. The second piece of it is text-to-text translation, again bidirectional in all 22 languages. Third was text-to-speech, which basically means the digital system should be able to speak to you, again in 22 languages. Then we are looking at optical character recognition in 22 languages, and also our digital dictionary. The vocabulary in all 22 languages is not digital, so there was an attempt to digitize all the vocabulary which is around. That includes names of places, people, companies, etc.

We have till now achieved 22 languages in all the modalities. We also have increased the number of languages. Incidentally, in India there are 100 languages which are spoken or written by at least 100,000-plus people. So our journey is not complete when we do 22 languages; we are moving ahead with more. We now have 36 languages on text, and we are going to add more languages as we move forward. We also have languages which don’t have a script, and those are basically in the tribal areas. So we are attempting to digitize those also, and that is being done. One of them has been digitized and will be launched in the next few days. In all of this, we had one basic challenge, which was non-availability of digital data.

So the non-availability of digital data, which is the oil for AI models, was addressed for the first time in the world as a brute-force digital data collection. What we had done was that we had about 200-odd people who would go down to the field and, you know, speak to the people on a certain subject. Pick up a picture or any other thing so that it becomes the topic of discussion. We would create the monolingual corpus by requesting them to write the same thing, or a bilingual corpus if they, you know, have two languages. And that is how we built the bare minimal digital data. Obviously, when we have done these things, the model is like a child.

It has only read 100 books, so it will be as intelligent as those 100 books. So we realized that over a period of time we need to collect more data; that means give the child a thousand books so it is more intelligent, and that journey continues. So we have taken AI as a journey, but we haven’t waited for things to become perfect before launching them as a product. We launched them and built narrow use cases. Narrow use cases in the sense that, okay, let’s build something for the farmers. I will try to give two examples for want of time. One is that we have built an interface for the farmers where farmers, in their own language, can ask a question about agricultural advisory, and he or she is answered in that particular language. So it’s a voice-first, voice journey; that means I will be talking in voice and, you know, the answers will be coming in voice. The other thing which we had actually experimented on and are working on, this is a deployed system, so it is actually a very large system. One of the things which have been displayed here is a project called Gyan Bharatam, where, you know, the manuscripts have been made interactive.

Plus we have multiple other use cases; perhaps I will come to them during the discussion, but we have about 20-odd of them displayed in

Nina Frey

Thank you, Amitabh. Thank you so much, and I somehow assumed everyone knows it, but obviously I should introduce you as well, so apologies for that. Mr. Amitabh Nag, he is the CEO of Bhashini, the national language initiative, and we will be collaborating; Alex will be mentioning more later on on that. But before that, I would turn a year back to Paris, where obviously Current AI was started and came out of the Public Interest AI Working Group, if I think correctly. So, Mrs. Aya Bedir, you say? Bedir, sorry for that. She’s the CEO, quite recently the CEO, of Current AI, a very, very important initiative that, amongst others, also champions the topic that we’re talking about.

But please, Aya, I know you come from a strong background also in hardware. You are launching, I think, this afternoon something very impressive that also highlights the importance of language diversity. Could you share with us some of your key focus interests and also why you so focus on hardware? Thank you.

Aya Bedir

Thank you so much for having me. So, my name is Aya Bedir. I, yes, did join recently, about a month and a half ago. Exactly. So really feeling the very warm welcome in India. Current AI was an initiative that came out of the French AI Summit. The founder, Martin Tisné, was the special envoy of President Macron at the summit, and the initiative essentially has a vision for AI that is global, that is collaborative, and that is collective. And so the idea is that we acknowledge that some of the biggest tech companies that are really governing our lives and really governing AI and the way we consume it day-to-day, they are a handful of companies; they are big, they have scale, they have a lot of financial resources, and they are very ambitious.

And so the initiative acknowledges that to be able to stand a chance to be an alternative, to be able to do something about it, and a counterpart to these large companies, we must fight scale with scale. And so obviously there is lots of interesting work happening in public interest AI around the world, but oftentimes the work is distributed, the work is decentralized, and sometimes it’s duplicative, and it’s not always additive.

And so as a result, Current AI has this vision that we need to sort of bring together and bring more collaboration into the space, but also raise the level of ambition and of financial scale that is taken on. So Current AI is a public-private partnership between philanthropy, the private sector and government. It has initial commitments of about $400 million, but the ambition is to get to $2.5 billion and hopefully more. The initial commitments are from the French government. There are also partners, multiple other governments, including the Indian government, the Kenyan government, Moroccan, and many others, as well as the MacArthur Foundation, Ford Foundation, McGovern, and a few others, and the private sector, so Google DeepMind, Salesforce, and others.

So it really is a public-private partnership with the intention of kind of bringing everybody around the table that has the same commitment to public interest AI, to AI that works for individuals and for the public good, and one of the main vehicles of doing that is really investing in open source. Language has been a priority for Current AI ever since its inception. The initiative was called Multilingual Diversity, which I know is something everybody here is committed to, and we’ve been hearing a lot about over the past few days. I joined about a month ago, and I’m myself very passionate about the topic, and I sort of expanded the topic to be about culture, diversity, and culture preservation.

So it’s really not just about language. It’s also about acknowledging that culture exists in many facets. Language is one of them, but there are also behaviors, there are norms, there are also artifacts, physical and digital artifacts, and there are many things that are digitized and non-digitized. And so we now talk about culture preservation as one of our big priorities, and it’s something that we’ll be doing a lot of work in. As part of the culture preservation work, also when I came in, there had already been conversations between Current AI and Bhashini about doing a collaboration together for the summit. And to be honest, I fell in love with the work that Amitabh and his team were doing and the care that they were taking with their diaries.

And I was like, oh my gosh, this is so cool, and really the fact that they were going to sort of the source and getting a lot of this knowledge, not just data, this knowledge about the language from individuals and from communities themselves, no matter how small they were. And so we ended up collaborating on a device that will launch later today at 3:30 in Room 10. I hope you all can attend. I’m not going to say much about it because there’s a drumroll situation that will happen, so you all can come see for yourselves. But the intention of the device is to really get as close as possible to the individuals and the communities themselves.

There is one concern I have that could be kind of a negative repercussion, I think, of having so much attention on multilingual diversity: that a lot of the big companies and big players have to do all the work. And so, you know, it’s interesting and positive that, you know, the big tech companies are saying we’re going to make commitments to more multilingual diversity and more languages.

That’s good. But oftentimes when they are kind of in the leadership taking these positions, there’s a brute-force kind of methodology that they deploy because of the scale at which they operate. And so oftentimes it’s about scraping data. Oftentimes it’s about taking data without licensing it. It’s about treating individuals and communities as data, whereas they are people and they are not data. And so that’s sort of my concern in this area, and I believe that we have to get as close as possible to the communities themselves and invite them and support them in doing that kind of work themselves. So it’s really about them preserving their own cultures and languages, and not about us doing it for them in this sort of somewhat condescending way.

I’ll also say one last thing, which is that I myself grew up in Beirut in Lebanon, a very tiny country, but one that everybody has heard of, sometimes for good and not-good reasons. But, you know, the Arab world is also very concerned about AI and representation in AI, and we have thousands of different cultures and dialects within Arab culture, and we also have varying degrees of resource availability across Arab countries. Some countries are very resourced financially from a government perspective; others have very scarce access to resources. So I’m also very concerned about thinking about AI that is more resilient, that operates from scarcity, operates from frugality, and operates from a limited amount of resources, and looking at that as a positive as opposed to a negative.

So that’s something that Current AI will be prioritizing in a big way, and we hope to do more of. So hope to see you all at 3:30 and hope

Nina Frey

Thank you so much, Aya. Let me hand over, because you mentioned obviously the many announcements that were made as well from private companies to start collecting data. I think it’s fantastic to see that governments can do that as well and that you also invest in this PPP so far. And allow me to hand over to my colleague sitting to my left, because I think you can also showcase how public institutions like universities can train a multilingual model from scratch. Scratch, not stretch. It was probably a stretch sometimes. Let me introduce you to Dr. Alex Ilic. He founded and is the executive director of the AI Center, and a co-founder also of ICAIN. And please, could you share your experiences with Apertus, which is this multilingual model, and maybe also mention something on Swiss AI and how we can maybe then present the Indian languages next year in Apertus.

Alex, please.

Alex Ilic

but basically we were able to build this model, and one of the key bottlenecks that we also identified is that it’s not just the infrastructure, where currently a lot of money is going in, but also the talent. Outside of big tech, you have maybe 100 people on the planet who have the experience and capabilities to build such foundation models, and that’s not enough. And I think that’s something where academia can change it, and I think that’s why it’s important that we not just need supercomputers and data centers for the companies; we need them to empower academia. This is very, very critical, that we also push this very, very strongly. We named the model Apertus, Latin for open, because we want it to be a foundation where everyone can take it and build on top of it.

So it’s not something that we force upon someone, but something that can be a thriving community where each university, each project, each country gets a step further. And I think we will hear later a little bit from the perspective of the Apertus Foundation and also from Singapore, from India. We already heard… There are not many countries that recognize how important it is, as public infrastructure, that you really take it seriously to develop your own benchmarks and your own data sources as well. Because today, still, if you read LinkedIn, whatever, the majority is driven by benchmarks that the big companies are publishing. And surprise, in every benchmark they publish, they are, of course, the best, because they pick whatever metric is usable.

And I think this metric should be driven by what we want it to be in the cultures and the regions, to empower this. And so we have 1,000 languages included because we trained it with data from the Internet. As you know, the Internet is not the most diverse data source there is. 60% of the data in our training set is English; 40% is non-English. And so what we’re thinking about now strategically is how we can increase the number of languages that are close to the performance we see in English, step by step for the next hundred languages and so on. And this is, I think, important, because many companies that are going into that area and say, oh, we sponsor a data collection effort, they just do it on a best-effort basis.

Like, let's do something, and you don't know whether it actually moves the needle. So the next step for us is that, with all the experiences, for instance in Bhashini and other projects, I think we can now find out very strategically how much it costs us to raise the bar significantly, not just make a check mark out of it. So that is also the hope for carrying this forward through the mission of ICAIN, and also to Geneva next year: that we can present how much progress we could make, where we stand today in terms of what is really usable, and economically usable, and elevate this. I think that's super critical on that side.

And, yeah, we're also very happy to be here. Thank you. … a version of Sea-Lion that's already built on Apertus, and we want to extend the collaborations now globally. For the researchers, we also have a very strong international program where we share our compute infrastructure. So that's also very unique, and we would like to see other countries do that as well, because we know that, for where we stand with AI today, we're maybe at 5% or 10% of the potential to train the next models that include more data, become more aware of the physical world, and so on.

We need more compute. We need to team up, and I think that's also a question of how we can collaborate more and share more. And with ICAIN, in the beginning we identified the bottlenecks: compute, which is why we have strong compute representation; data and benchmarks; and talent. On these three capabilities we need to jointly improve, and whoever doesn't have them should be able to easily get the data and the benchmarks. And I think that's where we're headed: enabling all sides to do it themselves, basically. Thank you.

Markus Reubi

Thank you so much, Alex, also for emphasizing the talent need. If I can just add: you mentioned the talent, or the capabilities, in knowing how to train a model. And again, I'm looking at the supercomputing representatives here, it's also a talent to know how to build up such an HPC system, so I think that's something we could also add to the table. But allow me to come back to the models themselves and to very concrete applications. Now I'm turning first left, to the north, to the Finns. Petri, you're here obviously as a founding member of ICAIN, representing the ELLIS Network but also, if I can say so, team Finland. But you were also a member of the UN Secretary-General's High-level Advisory Body on AI, where one of the recommendations was exactly this, that we collaborate. Could you speak maybe more from the perspective of the Nordics? You already had your own language models, but maybe you can also share some thoughts on why you recommended that to the world, if I may.

Petri Myllymäki

Yes, thank you. Happy to be here. So indeed, as you all know, Nordic languages are not the biggest major languages in the world, so obviously we take the preservation of our languages and cultures very seriously. Talking about the UN's High-level Advisory Body on AI: there was just upstairs a handover to the new International Independent Scientific Panel on AI. Maybe one thing I learned in this UN advisory body, and I didn't know this before, was that access to language and culture is a human right, one of the human rights that all the countries in the world have. And I think that's something we have to accept.

So to me this was a surprise, a pleasant surprise. Language is already important because we operate with language, but like Aya was saying, even more important is the culture behind the language. We have different value frameworks and norms in different countries. So if there's a one-size-fits-all, English-language AI that we all start to use, what is the value framework behind that? I think this is a very critical issue. Another thing I learned at the UN is that there are several global initiatives towards making this more accessible to all countries. Seven of the 193 UN member states are included in all of these initiatives.

119 countries are included in none. So initiatives like ICAIN or Current AI, and summits like this one, are very important to make this more inclusive. I will now shamelessly steal a quote from Yoshua Bengio, who was just upstairs, saying that we need to make sure that all the countries in the world are invited to the dinner table as dining guests, not as part of the menu. I thought this was hilarious.

Markus Reubi

Thank you so much. Thank you so much for sharing. I hadn't heard that quote, but I think it's something we can take up, because, obviously, food is very culturally diverse. Thank you. So let me turn from the north to the south, to Singapore. You're at NTU Singapore, which quite recently became the newest member of ICAIN. You had already developed, and I think you will share something on, the Sea-Lion model, which is obviously the famous language model for the ASEAN region. But you have also already collaborated with Apertus. If time allows, because I also need to watch the time, you could also mention something on the importance of sovereignty and language.

Please.

Participant

It's wonderful. At NTU Singapore we're the newest members of ICAIN, but it's fantastic that… And I've only been at NTU Singapore for six months, but the conversation that we're having here is the same conversation that we're having there: about the importance of multilingual diversity, the importance of getting close to the ground, the importance of culture as well as tech. I'm the dean of a college of humanities, arts and social sciences; I'm a historian. And it's there that my college is in the lead, collaborating with computer science at NTU, engaging with Sea-Lion, and thinking about AI in the context we're talking about. So I just want to point to something very, very important: that it's about culture, thinking about cultural diversity and how AI models reflect culture, how we engage with culture and history, as well as simply technology.

And I think that's something that's very evident in this conversation, so it's great to be part of this club. Sea-Lion is a language model that reflects 13 languages across Southeast Asia. In fact, it includes Tamil, because Tamil is a Southeast Asian language: it's an important national language within Singapore, with aspirations actually to expand and potentially connect beyond to other parts of Asia. It's a nationally funded initiative, but it connects from Singapore. So it's part of Singapore's public infrastructure, but it connects regionally and has good connections with private sector providers across Southeast Asia, in Indonesia, on various things, and so on. And, as we were hearing a moment ago, there are a number of different versions of it; one of them is built on Apertus, so there's a real synergy here.

And I think, you know, I just want to flag the connection between Singapore and Switzerland: they're both multilingual, multicultural, relatively small societies, so there's a very obvious collaboration there. And echoing again something that was said earlier: when we're thinking about AI, and about the relationship between culture and language, we're also interested in frugality, in using resources effectively, and in thinking about how we can draw on deep truths about language and culture without vast amounts of data, with relatively small amounts of data. I mean, we have languages like Laotian, Khmer, etc.

within Sea-Lion, and so colleagues are really thinking hard about how you leverage relatively small amounts of language data to then produce an effective model. Just a couple of additional points, and I'm looking at the clock. Sovereignty is the big word within this AI summit, and I'm actually, in some ways, a historian of sovereignty at the moment. Sovereignty means power. It's a power that we want for ourselves, for our communities, for our nations and states. But in a sense it's also about individuals, and there's a complicated relationship between those two things. And so I just wanted to reflect on the importance of sovereignty, in that we're talking about the sovereignty of societies that are neither the US nor China in this discussion.

The two big superpowers, maybe. And this discussion is about how we can think about a world that is multipolar and multicoloured, and reflects the fact that sovereignty is actually dispersed in the world in which we live, and that's very important. And that echoes Indian principles of non-alignment that go back to the 1940s and so on; I don't know if I'm allowed to use that phrase in today's India, but anyway, it's a similar set of principles that we're talking about. So the dispersal of sovereignty, and of power, that we're talking about here is important. But part of that, I think, is also to reflect on the limits of the nation state, and the limits of national approaches to language.

In that, we all live in environments in which people speak in complicated ways. They're multilingual societies in a minute-by-minute way. People code-switch: they're speaking Hindi one minute, English the next, or Swiss German, etc. Similarly, in Singapore, people speak Mandarin Chinese, then a Chinese dialect, and then English. So sovereignty is crucial, but if we're interested in the sovereignty of the individual and the power of individuals, then we need a more nuanced account of language that allows for things like code-switching, dialects, and so on. And that's something that we're very much interested in at NTU.

Markus Reubi

Thank you so much, and thank you to all the speakers. Allow me also to let Annie speak, because you're obviously from South Africa, but now living in Switzerland and the US, and you lead these linkages between medicine and AI. I always think you explain very concretely in your work what happens if you just take an English-language model trained on a tiny set of local data, and how you experience that in the medical field in reality. So if you could share something on that, and obviously also on your role in ICAIN. Thank you, Annie. Professor Annie Hartley is at EPFL at the moment, and also at Yale. Thank you.

Annie Hartley

Yeah, so thank you very much. I think I'll take it down to the ground, then, to the consequences of what really happens when you are at a patient's bedside and you ask questions that are high stakes. Something that I do to test these models in different places, because we are rolling out these tools in different hospitals around the world, is to ask the same very high-stakes question: how to treat diabetic ketoacidosis, which is a diabetic crisis, in a child. And I did this recently in Ethiopia, in a language that's not very well known, Afaan Oromo, and it responded to me: thou shalt not eat insulin on a Tuesday.

And I did share this advice, because I thought it was actually very good advice: you should not eat insulin on any day, actually. But it points to something that's really, really important, even if I'm stating the obvious. The model was obviously only trained on the Bible, right? That's what's available: the one book that is available in every single language in the world. So you get these biblical kinds of terms, but the Bible isn't necessarily very accurate in medicine or other things, depending on where you're coming from. The thing is that you can't rely on these models to make these decisions, because they are inequitably inaccurate in the places that need them most. We know that they'll be inaccurate; the point is that if we are collecting this kind of information, we have to make an effort to collect it in the highest-stakes environments and in those contexts. So if you have use cases for collecting language, it's interesting to collect it from historical texts, or to represent culture, of course, but I think something with a much bigger urgency is the urgent questions. These are high-stakes decisions that we are making, and people will believe that a model performs well if it merely speaks the language; they might get a false sense of security if we don't really train it to be accurate on the questions that people rely on these tools for the most.

And so this is why, when we collect languages and when we are trying to test these tools in reality, we have to make sure that we represent those kinds of contexts. And that's what we are doing. I lead a lab called LiGHT, the Laboratory for Intelligent Global Health and Humanitarian Response Technology. So obviously I'm interested in these high-stakes environments and these cultures that are so underrepresented that they will never be represented by any kind of large commercial enterprise, right? No commercial entity has ever said, there's a great place to make money, and it's that war zone. Okay, unfortunately they have, I suppose. But the point is that people don't want to represent that kind of place, because it's not in their interest.

And this means that it is so important for academia to play a role. We don't just play a role because we have expertise; we play a role because we do something that commercial entities cannot provide: we are neutral, and we create a neutral space for this kind of data collection, to represent the needs of people and also to make sure that we can test it in reality. This is why we can do open science: because we don't have any money in the game to lose, right? And the most important thing, when we do represent these languages, is not just to represent them and be happy about it, which is the first step, but to go the extra mile and actually test whether the languages are being represented as you expect them to be.

For some of my patients, for example, a model might speak their language, but does it speak their language in the way that they expect, and do they follow the advice or don't they? This is a really important thing to test in these high-stakes environments. My patients will come to me in South Africa. In South Africa we speak 11 official languages, and in Kaga the way of explaining certain things is very different, and it sometimes gets translated into English in a strange way. So one of my patients came to tell me, you know, I've got elephants running in my head, right? I know exactly how to respond, because that's my culture.

I'm South African. But what would an AI respond, right? Or: I have a pregnancy in my knee. I'm pregnant in my knee. That's what the patient came to tell me. And actually it doesn't come from a mistranslation; it comes from the way that people understand how their bodies work, and this is very, very cultural. What is the next most likely word after "pregnancy in my knee", right? So it's really important that we understand how language works when it's about our bodies, and that we get feedback from reality. This is what we're trying to do.

So we have, starting with ICAIN, a flagship project that we made. It's called MOVE, which stands for Massive Open Online Validation and Evaluation. It's about getting real-world signals from real people in high-stakes decision-making processes, from our doctors, from the people on the ground in different countries around the world, and getting information on how they are using any tool, because we are neutral. If any tool comes out, any new model, we can test it. We see how it works, and when it breaks, we don't just say, oh, this model is bad in this setting and that model is good. We really try to get that information and put it back into the model to continuously improve it.

And so: learning from reality, learning from the real workflows of how people use models. I think that's important, to represent reality, and not just the language but the reality that the language functions in. The last thing I'd like to say about this is that it does cost a little bit more money, and it's not the traditional way of working in science; people don't appreciate that implementation science is science. And it's such a fantastic opportunity, because we can actually measure the impact of the models that we are making, feed that back into our models, and really create impact-driven models. To run these trials is ambitious, but we do need to start asking for different kinds of funding and being more ambitious. And I think academia does need to be more ambitious, because we represent something that is actually very important these days, and very rare, which is this neutrality.

When OpenAI updates a model from 4.5 to 5, or to 5.1, did they ask your permission? No, right?

Did they ask the doctors who had validated those models for their context? No. We need control. We need to know how these tools work in reality, and we need to be able to control the tools. So sovereignty, for me, is control of the tools and control of the environment, and understanding how these models work in reality, so that we…

Markus Reubi

Thank you. Thank you so much. Thank you, honestly, everyone, for keeping the time and for making sure that we are actually creating the menu and controlling the menu, to also steal Professor Bengio's words, and for contributing here. I think we will be more than happy to update you, hopefully next year, on our further work. Thank you, everyone, for joining, for coming, for staying with us, and thank you to the speakers.

Related Resources: Knowledge base sources related to the discussion topics (15)
Factual Notes: Claims verified against the Diplo knowledge base (6)
Confirmed (high)

“Markus Reubi opened the summit and framed multilingual AI as a democratic imperative, warning that linguistic exclusion is a persistent barrier to digital participation.”

The knowledge base explicitly states that linguistic exclusion is a persistent barrier and that multilingual access is a democratic imperative, confirming the framing described in the report [S1].

Confirmed (high)

“The summit is situated within a trajectory that includes the Paris 2025 public‑interest AI process, the India AI Summit 2026 and the planned Geneva AI Summit 2027, underscoring Switzerland’s commitment to continuous, cooperative global AI governance.”

The Geneva AI Summit 2027 and Switzerland’s role in fostering continuity and cooperation in AI governance are confirmed in the knowledge base [S24].

Confirmed (high)

“India’s Barshini Initiative is cited as an early example of collaborative multilingual effort.”

The Barshini Initiative is mentioned in the knowledge base as a notable multilingual project originating from India [S24].

Confirmed (high)

“Nina Frey, Executive Director of ICANN, introduced the panel.”

Nina Frey’s role as Executive Director of ICANN is confirmed in the transcript excerpt from the knowledge base [S9].

Additional Context (medium)

“Bhashini initially targets the 22 languages of India’s Eighth Schedule, providing ASR, bidirectional text‑to‑text translation, TTS, OCR and a digital dictionary; it has since expanded to 36 languages and deployed roughly 200 field teams to create monolingual and bilingual corpora.”

The knowledge base discusses the Bhashini stack and its application for multilingual access, adding detail about its language coverage and data-collection efforts, which supports and expands on the report’s description [S11].

Correction (medium)

“The speaker’s name is given as “Markus Reubi”.”

The knowledge base spells the name as “Markus Reuvi”, indicating a typographical error in the report’s spelling [S9].

External Sources (80)
S1
How Multilingual AI Bridges the Gap to Inclusive Access — Petri Myllymäki from the Finnish Supercomputing Centre and ELIS Network emphasized that access to language and culture i…
S2
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Participant** – (Role/title not specified – appears to be Dr. Esther Yarmitsky based on context)
S3
Keynote Address_Revanth Reddy_Chief Minister Telangana — -Participant: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or organizer…
S4
Leaders TalkX: Moral pixels: painting an ethical landscape in the information society — – **Participant**: Role/Title: Not specified, Area of expertise: Not specified
S5
Democratizing AI: Open foundations and shared resources for global impact — Bernard Maissen, Mary-Anne (“Annie”) Hartley, Mennatallah El-Assady Katharina Frey: Thank you so much. I think you, oh …
S6
How Multilingual AI Bridges the Gap to Inclusive Access — – Alex Ilic- Annie Hartley – Aya Bedir- Annie Hartley
S7
How Multilingual AI Bridges the Gap to Inclusive Access — -Torsten Schwede- President of the Swiss National Science Foundation, involved in Indo-Swiss research collaboration
S8
How Multilingual AI Bridges the Gap to Inclusive Access — as a bridge to democratic access. Switzerland is very pleased to contribute to this global conversation at a pivotal tim…
S9
https://dig.watch/event/india-ai-impact-summit-2026/how-multilingual-ai-bridges-the-gap-to-inclusive-access — As we prepare for Geneva 2027, Switzerland views multilingual AI as a foundation for inclusive digital public services a…
S10
Inclusive AI_ Why Linguistic Diversity Matters — -Amitabh Nag- CEO of Bhashini
S11
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — mostly from my understanding and experience with the English that has happened, in the past. Yeah. interesting points, P…
S12
https://dig.watch/event/india-ai-impact-summit-2026/how-multilingual-ai-bridges-the-gap-to-inclusive-access — And so as a result, current AI has this vision that we need to sort of bring together and bring more collaboration into …
S13
How Multilingual AI Bridges the Gap to Inclusive Access — Thank you Amitabh. Thank you so much and I somehow assumed everyone knows it but obviously I should introduce you as wel…
S14
How Multilingual AI Bridges the Gap to Inclusive Access — 1197 words | 161 words per minute | Duration: 445 seconds Thank you so much for having me. So, my name is Aya Bedir. I…
S15
How Multilingual AI Bridges the Gap to Inclusive Access — Agreed with:Markus Reubi, Aya Bedir, Alex Ilic, Petri Myllymäki — Need for international collaboration and resource shar…
S16
How Multilingual AI Bridges the Gap to Inclusive Access — – Markus Reubi- Amitabh Nag- Alex Ilic- Petri Myllymäki- Participant – Markus Reubi- Torsten Schwede- Aya Bedir- Alex I…
S17
How Multilingual AI Bridges the Gap to Inclusive Access — – Alex Ilic- Annie Hartley- Nina Frey – Markus Reubi- Amitabh Nag- Alex Ilic- Petri Myllymäki- Participant – Markus Re…
S18
How Multilingual AI Bridges the Gap to Inclusive Access — Thank you Amitabh. Thank you so much and I somehow assumed everyone knows it but obviously I should introduce you as wel…
S19
https://dig.watch/event/india-ai-impact-summit-2026/how-multilingual-ai-bridges-the-gap-to-inclusive-access — I will introduce you and hand over the mic to you. In a minute. Please have a seat. turned out there was a seat for me y…
S20
How Multilingual AI Bridges the Gap to Inclusive Access — As we prepare for Geneva 2027, Switzerland views multilingual AI as a foundation for inclusive digital public services a…
S21
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Global governance of AI is a precursor for a democratic development and evolution. And we need to continue to develop an…
S22
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Andrea Jacobs: So that’s a very, very good question. And, you know, I’ve heard a lot of unpacking from different regions…
S23
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — These key comments fundamentally transformed the discussion from a conventional ‘skilling’ conversation to a more sophis…
S24
https://app.faicon.ai/ai-impact-summit-2026/how-multilingual-ai-bridges-the-gap-to-inclusive-access — In that, we all live in environments in which people speak complicated… They’re multilingual societies in a minute -by…
S25
Open Forum #33 Building an International AI Cooperation Ecosystem — **Professor Dai Li Na** from the Shanghai Academy of Social Sciences presented a comprehensive case study of Shanghai’s …
S26
Multistakeholder Model – Driver for Global Services and SDGs | IGF 2023 Open Forum #89 — By involving a diverse range of stakeholders, including governments, civil society, and the private sector, ICANN foster…
S27
Main Session | Policy Network on Internet Fragmentation — Amitabh Singhal: Good morning, and thank you for having me here. And thank you, Wim, for the summarization of the dis…
S28
AI as critical infrastructure for continuity in public services — So the participation of the community into that, in ensuring that the innovation and the policy level align with the nee…
S29
India’s AI Future Sovereign Infrastructure and Innovation at Scale — This panel discussion focused on India’s sovereign AI capabilities and the country’s strategy for building AI infrastruc…
S30
Building the Next Wave of AI_ Responsible Frameworks & Standards — What is interesting is India is uniquely positioned in this global AI discourse. Most global AI frameworks are designed …
S31
Keynote-Rishad Premji — For India, this moment represents an unprecedented opportunity to become “one of the world’s most consequential environm…
S32
Rethinking Africa’s digital trade: Entrepreneurship, innovation, & value creation in the age of Generative AI (depHub) — In summary, the analysis raises critical concerns regarding data protection, privacy, and ethical considerations. It und…
S33
The Mind and the Machine — In conclusion, the neurotechnology market is experiencing significant growth, with advancements in implanted neurotechno…
S34
High-Level Session 4: From Summit of the Future to WSIS+ 20 — Walton raises concerns about the ethical implications of AI and other emerging technologies. He emphasizes the need for …
S35
Democratizing AI Building Trustworthy Systems for Everyone — I think open source is going to be in my mind a critical aspect of it. You’ll have to see how far open source movement t…
S36
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — Jibu Elias: Thank you very much, Yoichi-san. It’s an honor to be here to share my experience building a responsible AI e…
S37
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that…
S38
Main Session 1: Global Access, Global Progress: Managing the Challenges of Global Digital Adoption — Shivnath Thukra: Thanks to you and thanks for inviting me, Meta from India on this panel. I will, in the spirit of bein…
S39
Diplomatic policy analysis — Ethical frameworks for data usage:The development of international norms and standards for data collection and analysis …
S40
Rethinking Africa’s digital trade: Entrepreneurship, innovation, & value creation in the age of Generative AI (depHub) — The analysis covers various arguments and stances on data protection, privacy, and ethical concerns. One argument raises…
S41
Artificial Intelligence & Emerging Tech — Pamela Chogo: Yes, can you hear me? Yes, we can hear you, Pamela. Okay, thank you. As mentioned, my name is Pamela from T…
S42
WS #214 Youth-Led Digital Futures: Integrating Perspectives and Governance — James Amate, from Ghana, highlighted the challenges in implementing data cooperatives, particularly in developing region…
S43
How Multilingual AI Bridges the Gap to Inclusive Access — Cultural preservation, sovereignty, and ethical considerations He argues that respecting sovereignty means acknowledgin…
S44
Global AI Policy Framework: International Cooperation and Historical Perspectives — -Sovereignty vs. Openness in AI Development: The concept of “open sovereignty” emerged as a key theme – the idea that co…
S45
Democratizing AI Building Trustworthy Systems for Everyone — “So we’re pleased to announce a Lingua Africa initiative where we are working with local communities in partnership with…
S46
Inclusive AI_ Why Linguistic Diversity Matters — The France-India partnership exemplified how countries with complementary strengths can collaborate to enhance rather th…
S47
WS #466 AI at a Crossroads Between Sovereignty and Sustainability — Yu Ping Chan: Thank you so much to the organizers for having me here today. So I represent the United Nations Developmen…
S48
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Demands on policy exist without the building blocks to support its implementation Lack of infrastructure, skills, compu…
S49
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S50
WS #462 Bridging the Compute Divide a Global Alliance for AI — Jason Slater: No, thank you. Well, linking it to what you just mentioned there and Gavi, we have, as I see a global digi…
S51
AI That Empowers Safety Growth and Social Inclusion in Action — Multi-stakeholder collaboration is needed to move from framework-heavy concepts to actionable implementation
S52
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — In conclusion, the analysis highlights the importance of collaboration and inclusivity in the development of AI standard…
S53
Open Forum #29 Advancing Digital Inclusion Through Segmented Monitoring — Collective vs Individual Approaches Local context requires more than superficial consultation – it means empowering loc…
S54
Balancing act: advocacy with big tech in restrictive regimes | IGF 2023 — Cagatay Pekyorur:Of course, I’ll try my best. First of all, I would like to start by thanking you for inviting me to thi…
S55
Why science metters in global AI governance — Finally, let us be clear. Science informs, but humans decide. Our goal is to make human control a technical reality, not…
S56
Deepfakes and the AI scam wave eroding trust — Organizations also need to rethink workflows that assume everything they see or hear is real. High-risk decisions, such …
S57
WS #219 Generative AI Llms in Content Moderation Rights Risks — ### The Low-Resource Language Crisis Dhanaraj Thakur: Yeah, great. Thank you, Marlena. And thanks for the invitation to…
S58
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — Cost reduction in technology deployment In summary, Google proposes that an open yet careful strategy towards technolog…
S59
WSIS Action Line C8: Multilingualism in the Digital Age: Inclusive Strategies for a People-Centered Information Society — Sofiya Zahova: Thank you, Davide. I’m honored and delighted to join you today on this important panel, but even more ple…
S60
How Small AI Solutions Are Creating Big Social Change — models from 4 billion to 15 billion. And once we select the best LLM for one target language, we do all these recipes to…
S61
How Multilingual AI Bridges the Gap to Inclusive Access — This comment reframes multilingual AI from a technical problem to a fundamental democratic and human rights issue. It es…
S62
How Multilingual AI Bridges the Gap to Inclusive Access — “AI can only serve the public good if it serves all languages and all cultures.”[1]. “Today, linguistic exclusion remain…
S63
AI as critical infrastructure for continuity in public services — Data is siloed, data is not ready for AI scale. There’s no governance built around data. And that’s why POCs, you use a …
S64
Inclusive AI_ Why Linguistic Diversity Matters — Bhashini’s remarkable journey, beginning in 2023, demonstrated impressive rapid development to support 15 million daily …
S65
India’s AI Future Sovereign Infrastructure and Innovation at Scale — This panel discussion focused on India’s sovereign AI capabilities and the country’s strategy for building AI infrastruc…
S66
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Abhishek Agarwal: Thank you, Minister. Abhishek? Yeah, I kind of echo the views of Her Excellency, like the three key in…
S67
New plan outlines how India will democratise AI infrastructure — India is moving to rebalance access to AI infrastructure as part of a new national push to close gaps in computing power a…
S68
WS #41 Big Techs and Journalism: Disputes and Regulatory Models — In Indonesia, some journalists are using AI tools like ChatGPT to generate news content, which they then edit and publis…
S69
The Mind and the Machine — In conclusion, the neurotechnology market is experiencing significant growth, with advancements in implanted neurotechno…
S70
Rethinking Africa’s digital trade: Entrepreneurship, innovation, & value creation in the age of Generative AI (depHub) — In summary, the analysis raises critical concerns regarding data protection, privacy, and ethical considerations. It und…
S71
Democratizing AI Building Trustworthy Systems for Everyone — I think open source is going to be in my mind a critical aspect of it. You’ll have to see how far open source movement t…
S72
Main Session 1: Global Access, Global Progress: Managing the Challenges of Global Digital Adoption — Shivnath Thukra: Thanks to you and thanks for inviting me, Meta from India on this panel. I will, in the spirit of bein…
S73
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — Sarim Aziz: At the risk of contradicting Matisse, but just to say yes, I mean, that’s one option. But I think the ans…
S74
WS #208 Democratising Access to AI with Open Source LLMs — Melissa Muñoz Suro: So basically, building on what I was mentioning earlier about our national AI strategy back in the D…
S75
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that…
S76
test marko — Schneider agreed with the need for better cooperation and cross-referencing among Geneva-based institutions. He emphasiz…
S77
AI for agriculture Scaling Intelegence for food and climate resiliance — A lot of questions in the same question. So what I’ll do is I’ll just first take you through the initiatives. First of a…
S78
AI Meets Agriculture Building Food Security and Climate Resilien — But what was felt was that while we had initiated this process to ensure that the bureaucratic red tapism is removed, wh…
S79
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — And as you are… We are aware in the Netherlands that strong ICT ecosystems and highly innovative agricultural ecosyste…
S80
AI for agriculture Scaling Intelegence for food and climate resiliance — It took a lot of effort and as Soumya said, creation of a massive ecosystem. Germany, which pioneered this, lost that ra…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Markus Reubi
1 argument, 122 words per minute, 932 words, 458 seconds
Argument 1
AI must serve all languages and cultures to ensure democratic participation
EXPLANATION
Markus Reubi argues that artificial intelligence can only benefit the public good if it is inclusive of every language and cultural context. He frames multilingual access not merely as a technical hurdle but as essential for democratic participation and equity.
EVIDENCE
He states that AI can only serve the public good if it serves all languages and all cultures, and that linguistic exclusion is a persistent barrier to digital participation, making multilingual access a democratic imperative [4-5].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The claim that AI can only serve the public good if it serves all languages and cultures, and that linguistic exclusion is a persistent barrier to democratic digital participation, is documented in the discussion summary [S1] and reinforced as a democratic imperative in the commentary [S8].
MAJOR DISCUSSION POINT
Multilingual AI as a democratic imperative
AGREED WITH
Petri Myllymäki, Participant
Petri Myllymäki
1 argument, 150 words per minute, 332 words, 132 seconds
Argument 1
Language access is a fundamental human right; inclusive initiatives are essential
EXPLANATION
Petri Myllymäki emphasizes that access to language and culture is a recognized human right, drawing on his experience with UN advisory bodies. He calls for inclusive policies that protect linguistic diversity as a core component of human rights.
EVIDENCE
He notes that during his work with the UN advisory body he learned that access to language and culture is a human right, and stresses the importance of preserving Nordic languages and cultures as part of that right [226-229].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Petri Myllymäki’s emphasis that access to language and culture is a recognized human right is explicitly noted in the external commentary on multilingual AI [S8].
MAJOR DISCUSSION POINT
Language access as a human right
AGREED WITH
Markus Reubi, Participant
Torsten Schwede
1 argument, 141 words per minute, 800 words, 338 seconds
Argument 1
Launch of three joint research calls (geosciences, social sciences, One Health) and a new Indo‑Swiss Research Framework
EXPLANATION
Torsten Schwede announces three new Indo‑Swiss joint research calls covering geosciences, social sciences, and One Health, and introduces a longer‑term Indo‑Swiss Research Framework Programme. He positions these initiatives as a way to deepen bilateral scientific cooperation and address cross‑border challenges.
EVIDENCE
He announces three new calls for joint research projects in geosciences, social sciences, and One Health, and describes the launch of a new Indo-Swiss Research Framework Programme to provide ongoing support for collaborative research [27-34][41-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Torsten Schwede’s announcement of three new Indo-Swiss joint research calls and the longer-term Indo-Swiss Research Framework Programme is recorded in the external source summarising the session [S8].
MAJOR DISCUSSION POINT
Indo‑Swiss research collaboration and funding programmes
AGREED WITH
Markus Reubi
Amitabh Nag
1 argument, 162 words per minute, 811 words, 300 seconds
Argument 1
Bhashini (Bhasha Interface for India) delivers speech, text, OCR and dictionary services in 22+ languages, with farmer advisory use cases
EXPLANATION
Amitabh Nag describes the Bhashini initiative, which provides automatic speech recognition, text‑to‑text translation, text‑to‑speech, OCR and a digital dictionary across 22 Indian languages, later expanded to 36. He highlights concrete applications such as a voice‑first agricultural advisory system for farmers.
EVIDENCE
He outlines Bhashini’s five functional modules (automatic speech recognition, bidirectional text translation, text-to-speech, OCR, and a digital dictionary), covering 22 languages and later 36, and cites a farmer-advisory voice interface as a real-world use case [78-86][95-98][107].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The description of Bhashini’s five functional modules and its voice-first agricultural advisory system for farmers appears in the external summary of the discussion [S1].
MAJOR DISCUSSION POINT
Specific multilingual AI initiatives and technical progress
Alex Ilic
2 arguments, 188 words per minute, 770 words, 245 seconds
Argument 1
Apertus is an open, multilingual foundation model; collaboration with other projects expands language coverage
EXPLANATION
Alex Ilic explains that the Apertus model is an open, multilingual foundation model designed for community use and built with data covering roughly 1,000 languages. He notes collaborations with projects such as C‑Line and the broader ICAIN network to broaden language support.
EVIDENCE
He describes Apertus as a fully open multilingual foundation model named after the Latin word for ‘open’, trained on internet data that includes 1,000 languages, and mentions its integration with other initiatives like C-Line to increase language coverage [187-194].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Alex Ilic’s presentation of Apertus as a fully open, multilingual foundation model trained on data covering roughly 1,000 languages, and its collaborations with projects such as C-Line, is detailed in the external commentary [S8].
MAJOR DISCUSSION POINT
Specific multilingual AI initiatives and technical progress
AGREED WITH
Markus Reubi, Participant
DISAGREED WITH
Participant
Argument 2
Severe talent shortage for building foundation models; academia needs empowerment through compute resources and culturally relevant benchmarks
EXPLANATION
Alex Ilic highlights a critical shortage of experts capable of building large foundation models, estimating only about 100 qualified individuals worldwide outside big tech. He calls for increased academic access to compute, data, and culturally appropriate benchmarks to close this gap.
EVIDENCE
He notes that only around 100 people globally have the expertise to build foundation models, stressing the need for academia to gain compute resources, data, and benchmarks, and points to current imbalances in benchmark creation by large companies [183-186][210-218].
MAJOR DISCUSSION POINT
Talent, compute, benchmarks, and the role of academia
AGREED WITH
Markus Reubi, Participant
DISAGREED WITH
Annie Hartley
Participant
2 arguments, 158 words per minute, 797 words, 301 seconds
Argument 1
C‑Line model for Southeast Asian languages, built partly on Apertus, emphasizes frugal data use and regional sovereignty
EXPLANATION
The participant describes the C‑Line language model, which supports 13 Southeast Asian languages (including Tamil) and is partly built on Apertus. The model is designed to work with limited data resources, reflecting a frugal approach and supporting regional sovereignty in AI development.
EVIDENCE
They explain that C-Line covers 13 Southeast Asian languages, is a nationally funded initiative, leverages Apertus for synergy, and stresses the need to achieve effective models with relatively small data amounts while preserving regional sovereignty [250-262][263-268].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The C-Line language model supporting 13 Southeast Asian languages, its reliance on Apertus data, and its frugal-data, sovereignty-focused design are described in the discussion summary [S1].
MAJOR DISCUSSION POINT
Specific multilingual AI initiatives and technical progress
AGREED WITH
Alex Ilic, Markus Reubi
DISAGREED WITH
Alex Ilic
Argument 2
Sovereignty of societies and individuals requires nuanced handling of code‑switching and cultural contexts
EXPLANATION
The participant argues that true sovereignty in AI must consider the multilingual, code‑switching realities of societies and respect individual cultural contexts, rather than imposing a single dominant language model. This perspective calls for AI systems that reflect the diversity of language practices across nations.
EVIDENCE
They discuss the importance of sovereignty for societies not aligned with major powers, the prevalence of code-switching in daily life, and the need for AI to accommodate such linguistic fluidity [265-272][277-281].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to respect linguistic complexity such as code-switching in multilingual societies, and the broader notion of cultural sovereignty, is discussed in the external notes on multilingual dynamics [S24] and cultural preservation [S1].
MAJOR DISCUSSION POINT
Governance, ethics, and community‑centric data collection
AGREED WITH
Markus Reubi, Petri Myllymäki
Aya Bedir
1 argument, 161 words per minute, 1197 words, 445 seconds
Argument 1
Public‑private partnership (Current AI) must avoid big‑tech data scraping; prioritize community‑led language and culture preservation
EXPLANATION
Aya Bedir stresses that the Current AI public‑private partnership should not rely on large‑scale data scraping by big tech, but instead empower communities to lead language and cultural preservation efforts. She warns against treating individuals as mere data sources and calls for respectful, community‑driven data collection.
EVIDENCE
She outlines Current AI’s PPP structure involving governments, foundations, and companies, and then expresses concern that big-tech approaches often scrape data without licensing, treating communities as data rather than people, advocating for community-centric preservation [126-133][158-164].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Aya Bedir’s warning against large-scale data scraping by big tech and her call for community-centric language and cultural preservation are highlighted in the external commentary [S8].
MAJOR DISCUSSION POINT
Governance, ethics, and community‑centric data collection
AGREED WITH
Nina Frey, Alex Ilic, Torsten Schwede
DISAGREED WITH
Alex Ilic
Nina Frey
1 argument, 125 words per minute, 827 words, 394 seconds
Argument 1
ICAIN network fosters collaborative, inclusive governance across academia and policy actors
EXPLANATION
Nina Frey describes the ICAIN network as a multiregional collaboration linking academic partners from Europe, Africa, and Singapore, and emphasizes its role in bringing together diverse stakeholders for inclusive AI governance. She positions ICAIN as a platform for open, collaborative innovation.
EVIDENCE
She notes that ICAIN links academic partners across continents and hosts a large panel of representatives, highlighting its multistakeholder nature and its focus on language and cultural diversity across summits [57-60][68-71].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The multistakeholder nature of the ICAIN network, linking academic partners across regions and promoting inclusive AI governance, is outlined in the external source on multistakeholder models [S26].
MAJOR DISCUSSION POINT
Governance, ethics, and community‑centric data collection
AGREED WITH
Aya Bedir, Alex Ilic, Torsten Schwede
Annie Hartley
1 argument, 185 words per minute, 1419 words, 459 seconds
Argument 1
Medical example shows AI can give dangerous advice if not validated; LIGHTS and MOVE projects collect real‑world feedback to improve models
EXPLANATION
Annie Hartley provides a concrete medical case where an AI model gave unsafe advice for diabetic ketoacidosis, illustrating the risks of unvalidated AI in high‑stakes settings. She then describes the LIGHTS lab and the MOVE project, which gather real‑world validation data to continuously improve models.
EVIDENCE
She recounts an instance where the model suggested “do not eat insulin on a Tuesday,” highlighting the danger of relying on AI trained on limited data, and explains that the LIGHTS lab and MOVE initiative collect real-world signals from clinicians to test and refine models [287-295][299-330].
MAJOR DISCUSSION POINT
Real‑world validation and high‑stakes impact
DISAGREED WITH
Alex Ilic
Agreements
Agreement Points
Multilingual AI is essential for democratic participation and a human right
Speakers: Markus Reubi, Petri Myllymäki, Participant
AI must serve all languages and cultures to ensure democratic participation
Language access is a fundamental human right; inclusive initiatives are essential
Sovereignty of societies and individuals requires nuanced handling of code‑switching and cultural contexts
All three speakers stress that language inclusion is not merely technical but a democratic imperative and a recognized human right, and that respecting linguistic diversity and sovereignty is crucial for equitable digital participation [4-5][226-229][265-272][277-281].
POLICY CONTEXT (KNOWLEDGE BASE)
This view echoes the WSIS Action Line C8 commitment to language rights and inclusive information societies, and aligns with UN-led discussions on multilingualism as a human right [S59][S43].
Collaborative, multistakeholder approaches are needed to advance multilingual AI
Speakers: Nina Frey, Aya Bedir, Alex Ilic, Torsten Schwede
ICAIN network fosters collaborative, inclusive governance across academia and policy actors
Public‑private partnership (Current AI) must avoid big‑tech data scraping; prioritize community‑led language and culture preservation
Severe talent shortage for building foundation models; academia needs empowerment through compute resources and culturally relevant benchmarks
Launch of three joint research calls (geosciences, social sciences, One Health) and a new Indo‑Swiss Research Framework
The speakers highlight the importance of cross-sector collaboration between academia, governments, and private partners to fund, develop, and govern multilingual AI responsibly, avoiding top-down data practices and addressing talent and resource gaps [57-60][68-71][126-133][158-164][183-186][210-218][27-34][41-44].
POLICY CONTEXT (KNOWLEDGE BASE)
Multistakeholder governance is highlighted in UNDP’s global digital compact and IGF recommendations, emphasizing cooperation among governments, civil society and industry [S47][S50][S52].
Open, multilingual foundation models and shared resources accelerate inclusive AI
Speakers: Alex Ilic, Markus Reubi, Participant
Apertus is an open, multilingual foundation model; collaboration with other projects expands language coverage
“A notable example is Apertus, which maybe many of you have heard of. It was developed by ETH Zurich and EPFL, fully open and transparent multilingual model designed to support public interest applications across diverse linguistic communities”
C‑Line model for Southeast Asian languages, built partly on Apertus, emphasizes frugal data use and regional sovereignty
All three emphasize open-source multilingual models, namely Apertus and its extensions such as C-Line, as public infrastructure that can be leveraged across regions to broaden language coverage while using limited data resources [187-194][13-15][250-262].
POLICY CONTEXT (KNOWLEDGE BASE)
The open-source debate and the “open sovereignty” model advocate shared foundations to reduce dependency on proprietary tech, as discussed by Google and in the Lingua Africa initiative [S58][S45][S44].
Talent, compute, and benchmark gaps must be addressed to enable academia to build multilingual models
Speakers: Alex Ilic, Markus Reubi, Participant
Severe talent shortage for building foundation models; academia needs empowerment through compute resources and culturally relevant benchmarks
“Thank you so much Alex and also for emphasizing the talent need and I think if I can just add that you mentioned the talent or the capabilities in knowing how to train a model something that and again I’m looking at the supercomputing representatives I mean it’s also a talent in knowing how to build up such an HPC so I think that’s something we could also add to the table”
“We are also interested in frugality, we’re also interested in using resources effectively, and in thinking about how we can you know, draw on sort of deep truths about language and culture without vast amounts of data, you know, kind of with relatively small amounts of data”
The speakers converge on the need to close the talent and resource gap, highlighting the scarcity of experts, the necessity of compute infrastructure, and the importance of efficient, frugal data approaches for academia to develop multilingual AI [183-186][210-218][220-221][263-268].
POLICY CONTEXT (KNOWLEDGE BASE)
Reports on the compute divide and the low-resource language crisis call for investment in talent, infrastructure and benchmarks to bridge academic gaps [S48][S57][S60].
Funding mechanisms and joint research programmes support multilingual AI development
Speakers: Torsten Schwede, Markus Reubi
Launch of three joint research calls (geosciences, social sciences, One Health) and a new Indo‑Swiss Research Framework
“We will begin with the announcement of the launch of the three new joint calls under the lead of the Indo‑Swiss Joint Research Programme, JRP, which is making a further strengthening of our bilateral ties in science, innovation and research between Switzerland and India”
Both speakers announce coordinated funding calls and a longer-term research framework to foster bilateral collaboration on multilingual AI and related domains [27-34][41-44][17-19].
POLICY CONTEXT (KNOWLEDGE BASE)
Partnerships such as the France-India collaboration and the Gates-Funded Lingua Africa programme illustrate how joint funding accelerates multilingual research [S46][S45][S50].
Similar Viewpoints
Both frame multilingual access as a democratic and human‑rights imperative rather than a mere technical issue [4-5][226-229].
Speakers: Markus Reubi, Petri Myllymäki
AI must serve all languages and cultures to ensure democratic participation
Language access is a fundamental human right; inclusive initiatives are essential
Both stress the necessity of real‑world validation and robust benchmarking to ensure safe, reliable AI outcomes, especially in high‑stakes contexts [183-186][210-218][287-295][299-330].
Speakers: Alex Ilic, Annie Hartley
Severe talent shortage for building foundation models; academia needs empowerment through compute resources and culturally relevant benchmarks
Medical example shows AI can give dangerous advice if not validated; LIGHTS and MOVE projects collect real‑world feedback to improve models
Both caution against top‑down, data‑scraping approaches and advocate for community‑centric, sovereign control over language data and AI systems [158-164][265-272][277-281].
Speakers: Aya Bedir, Participant
Public‑private partnership (Current AI) must avoid big‑tech data scraping; prioritize community‑led language and culture preservation
Sovereignty of societies and individuals requires nuanced handling of code‑switching and cultural contexts
Unexpected Consensus
Community‑led data collection and sovereignty versus big‑tech data scraping
Speakers: Aya Bedir, Participant
Public‑private partnership (Current AI) must avoid big‑tech data scraping; prioritize community‑led language and culture preservation
Sovereignty of societies and individuals requires nuanced handling of code‑switching and cultural contexts
While Aya focuses on avoiding large-scale scraping by big tech, the Participant emphasizes sovereignty and community control, revealing an unexpected alignment on the need for grassroots data practices and resistance to dominant corporate models [158-164][265-272][277-281].
Need for real‑world validation in high‑stakes domains
Speakers: Annie Hartley, Alex Ilic
Medical example shows AI can give dangerous advice if not validated; LIGHTS and MOVE projects collect real‑world feedback to improve models
Severe talent shortage for building foundation models; academia needs empowerment through compute resources and culturally relevant benchmarks
Annie highlights concrete failures in medical AI without validation, while Alex calls for better benchmarks and academic capacity-both converging on the importance of rigorous, real-world testing, an area not explicitly addressed by other speakers [287-295][299-330][183-186][210-218].
POLICY CONTEXT (KNOWLEDGE BASE)
High-stakes AI governance calls for human oversight, continuous verification, and safeguards against deepfakes, as outlined by UNESCO and AI safety experts [S55][S56][S57].
Overall Assessment

The discussion shows strong convergence on the necessity of multilingual AI as a democratic and human‑rights issue, the importance of open, collaborative governance structures, and the need for dedicated funding, talent development, and community‑centric data practices. Speakers from government, academia, and the private sector largely agree on these themes, with only minor variations in emphasis.

High consensus across most thematic areas, indicating a shared commitment to building inclusive, open, and responsibly governed multilingual AI ecosystems. This consensus supports coordinated policy actions, joint funding programmes, and capacity‑building initiatives to advance the agenda.

Differences
Different Viewpoints
Data collection methodology – community‑led ethical collection vs reliance on large‑scale data scraped by big‑tech and benchmark‑driven approaches
Speakers: Aya Bedir, Alex Ilic
Public‑private partnership (Current AI) must avoid big‑tech data scraping; prioritize community‑led language and culture preservation
Severe talent shortage for building foundation models; academia needs empowerment through compute resources and culturally relevant benchmarks
Aya warns that big-tech often scrapes data without licensing, treating communities as data rather than people, and calls for community-driven preservation [158-164]. Alex acknowledges that most benchmarks are created by large companies and that only a few experts can build models, urging more academic access to compute, data and culturally appropriate benchmarks [191-193][183-186][210-218]. The two positions clash over whether the primary solution is to curb big-tech data practices and empower communities, or to focus on building academic capacity within the existing data-centric ecosystem.
POLICY CONTEXT (KNOWLEDGE BASE)
Ethical frameworks and community data cooperatives emphasize sovereignty and responsible collection, contrasting with big-tech scraping practices [S39][S42][S45].
Resource strategy for multilingual models – high‑compute, large‑scale expansion versus frugal, low‑resource, sovereignty‑focused development
Speakers: Alex Ilic, Participant
Apertus is an open, multilingual foundation model; collaboration with other projects expands language coverage
C‑Line model for Southeast Asian languages, built partly on Apertus, emphasizes frugal data use and regional sovereignty
Alex describes a strategy of scaling language coverage by increasing compute, data and costs to bring many languages up to English-level performance, noting the need for substantial investment [196-199][200-202]. The Participant stresses a frugal approach that achieves effective models with limited data while preserving regional sovereignty and cultural autonomy [262-268][263-268]. This creates a disagreement on whether multilingual AI should be pursued through resource-intensive scaling or through low-resource, sovereignty-centered methods.
POLICY CONTEXT (KNOWLEDGE BASE)
The “open sovereignty” concept and compute-divide initiatives propose frugal, locally controlled AI development as an alternative to resource-intensive models [S44][S48][S50][S57].
Evaluation of AI systems – benchmark‑centric validation versus real‑world, high‑stakes validation and continuous feedback
Speakers: Alex Ilic, Annie Hartley
Severe talent shortage for building foundation models; academia needs empowerment through compute resources and culturally relevant benchmarks
Medical example shows AI can give dangerous advice if not validated; LIGHTS and MOVE projects collect real‑world feedback to improve models
Alex emphasizes the importance of benchmarks, noting that current benchmarks are dominated by big companies and that improving them is essential for model development [191-193][210-218]. Annie provides a concrete case where an unvalidated model gave unsafe medical advice, arguing for real-world validation through projects like LIGHTS and MOVE that gather feedback from clinicians in high-stakes settings [287-295][299-330]. The disagreement lies in whether progress should be measured primarily through benchmark performance or through field-based validation and iterative improvement.
POLICY CONTEXT (KNOWLEDGE BASE)
Governance literature stresses moving beyond benchmarks toward continuous, real-world monitoring for safety-critical applications [S55][S56][S52].
Unexpected Differences
Community‑led data collection vs acceptance of big‑tech dominated benchmarks
Speakers: Aya Bedir, Alex Ilic
Public‑private partnership (Current AI) must avoid big‑tech data scraping; prioritize community‑led language and culture preservation
Severe talent shortage for building foundation models; academia needs empowerment through compute resources and culturally relevant benchmarks
Aya explicitly condemns big-tech data-scraping practices as unethical and calls for community-driven preservation [158-164], whereas Alex, while noting the dominance of big-company benchmarks, does not challenge the underlying data practices and instead focuses on building academic capacity within that existing ecosystem [191-193][210-218]. The contrast between a strong ethical stance against big-tech methods and a more pragmatic acceptance of the status quo was not anticipated given their shared involvement in the ICAIN network.
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for community ownership of data and benchmarks appear in discussions on data cooperatives and open-source standards [S42][S53][S54].
Sovereignty and code‑switching emphasis versus generic multilingual AI framing
Speakers: Participant, Markus Reubi
Sovereignty of societies and individuals requires nuanced handling of code‑switching and cultural contexts
AI must serve all languages and cultures to ensure democratic participation
The Participant stresses that AI systems must respect individual and societal sovereignty, accounting for code-switching and cultural nuance [265-272][277-281], while Markus presents multilingual AI primarily as a democratic tool without addressing these deeper sovereignty and linguistic fluidity issues [4-5]. The divergence between a detailed sovereignty perspective and a broader democratic framing was unexpected.
POLICY CONTEXT (KNOWLEDGE BASE)
Research on code-switching and cultural preservation highlights the need for AI that respects linguistic sovereignty, contrasting with one-size-fits-all multilingual models [S43][S44][S46].
Overall Assessment

The discussion revealed broad consensus on the importance of multilingual AI and the need for talent, compute and funding. However, substantive disagreements emerged around data collection ethics, resource‑intensive versus frugal model scaling, and evaluation methods (benchmark‑centric versus real‑world validation). These divergences reflect differing priorities between community‑centric ethics, academic capacity building, and large‑scale industrial approaches.

Moderate to high – while participants share overarching goals (inclusive, multilingual AI), they diverge on the pathways to achieve them, which could affect coordination of future initiatives, funding allocations, and governance frameworks.

Partial Agreements
Both speakers agree that multilingual AI is essential for inclusive societies. Markus frames it as a democratic imperative, stating that linguistic exclusion hinders digital participation [4-5], while Petri frames language access as a human right based on UN experience [226-229]. They differ on the normative framing (democracy vs human rights) but share the same goal of inclusive AI.
Speakers: Markus Reubi, Petri Myllymäki
AI must serve all languages and cultures to ensure democratic participation. Language access is a fundamental human right; inclusive initiatives are essential.
Both recognize the need for skilled talent and infrastructure to build multilingual models. Markus points to the need for supercomputing expertise and talent in HPC [220-221], while Alex highlights the broader shortage of experts and calls for academic empowerment with compute and benchmarks [183-186][210-218]. They agree on the necessity of talent but differ on the focus (high‑performance computing infrastructure vs broader academic capacity building).
Speakers: Markus Reubi, Alex Ilic
AI must serve all languages and cultures to ensure democratic participation. Severe talent shortage for building foundation models; academia needs empowerment through compute resources and culturally relevant benchmarks.
Both speakers support strengthening Indo‑Swiss research collaboration. Torsten announces three new calls across specific domains and a longer‑term framework [27-34][41-44], while Markus mentions the upcoming launch of three joint calls under the JRP [17-18]. They share the goal of deepening bilateral research but differ in the level of detail and emphasis (Torsten highlights thematic breadth, Markus focuses on the procedural announcement).
Speakers: Torsten Schwede, Markus Reubi
Launch of three joint research calls (geosciences, social sciences, One Health) and a new Indo‑Swiss Research Framework. We will begin with the announcement of the launch of the three new joint calls under the lead of the Indo‑Swiss Joint Research Programme, JRP.
Takeaways
Key takeaways
Multilingual AI is framed as a democratic imperative and a human right; AI must serve all languages and cultures to ensure inclusive digital participation.
Indo‑Swiss research collaboration is being deepened with three new joint research calls (geosciences, social sciences, One Health) and the launch of an Indo‑Swiss Research Framework Programme.
Concrete multilingual AI projects were highlighted: Bhashini (India) delivering speech, text, OCR and dictionary services in 22+ languages; Apertus (Switzerland) as an open multilingual foundation model; C‑Line (Singapore) targeting Southeast Asian languages with frugal data use.
Governance emphasis on public‑private partnerships (Current AI) that avoid big‑tech data‑scraping, prioritize community‑led language and cultural preservation, and respect data licensing and consent.
The ICAIN network provides a collaborative, inclusive governance platform linking academia, policy makers, and industry across regions.
Severe talent shortage for building foundation models; academia needs empowerment through compute resources, culturally relevant benchmarks, and talent development.
Real‑world validation is critical; examples from medical contexts show dangerous errors when models are not validated. Projects like LIGHTS and MOVE aim to collect high‑stakes feedback to improve models.
Resolutions and action items
Official launch of three Indo‑Swiss joint research calls (geosciences, social sciences, One Health) and the establishment of the Indo‑Swiss Research Framework Programme.
Introduction of new funding schemes: Explore, Experiment, and Expand grants; increased mobility funding for consortium members; and plans for frequent flagship events in Switzerland and India.
Commitment to expand language coverage in Apertus and to collaborate with Bhashini for broader multilingual capability.
Announcement of a joint demo device (Current AI & Bhashini) to be presented at 15:30, showcasing community‑centric language technology.
Agreement to pursue frugal data approaches and regional sovereignty considerations in model development (e.g., the C‑Line project).
Implementation of the MOVE (Massive Open Online Validation and Evaluation) project to gather real‑world feedback from high‑stakes domains such as healthcare.
ICAIN to continue facilitating inclusive governance, ensuring all countries are invited as participants rather than merely data sources.
Unresolved issues
Scalable, ethical methods for community‑led data collection that respect licensing and consent while meeting the data needs of large models.
Definition of appropriate multilingual performance metrics and benchmarks beyond English‑centric standards.
Sustainable funding models for high‑stakes validation in low‑resource or conflict‑affected settings.
Operational details on how to balance national sovereignty with multinational AI governance, especially regarding code‑switching and dialect handling.
Exact cost estimates and logistical plans for expanding Bhashini from 22 to 36+ languages and for digitizing unwritten tribal languages.
Concrete strategies to address the global talent shortage and to build capacity in regions currently lacking expertise.
Suggested compromises
Adopt a public‑private partnership (Current AI) that combines philanthropic, governmental, and industry resources while keeping data collection community‑driven.
Utilize frugal data techniques (as demonstrated by C‑Line) to achieve multilingual performance without massive data scraping.
Implement inclusive governance through ICAIN, treating all nations as guests at the table rather than merely sources of data.
Release open, transparent models like Apertus that can be extended by any community, allowing shared ownership and reducing reliance on proprietary systems.
Thought Provoking Comments
AI can only serve the public good if it serves all languages and all cultures… linguistic exclusion remains one of the most persistent barriers to digital participation, ensuring multilingual access is therefore not only a technical challenge, it’s a democratic imperative.
Frames multilingual AI not just as a technical issue but as a fundamental democratic right, setting a normative baseline for the entire discussion.
Established the central theme of the summit, prompting subsequent speakers to justify their projects in terms of democratic inclusion and influencing the agenda toward language equity.
Speaker: Markus Reubi
We had about 200‑odd people go into the field, collect speech, text, pictures… create monolingual and bilingual corpora because the non‑availability of digital data was the biggest challenge. We then built a voice‑first interface for farmers in their own language.
Provides a concrete, ground‑up methodology for overcoming data scarcity, illustrating how large‑scale multilingual AI can be built through community engagement rather than top‑down data scraping.
Shifted the conversation from policy to practical implementation, inspiring later remarks about community‑driven data collection and validating the feasibility of multilingual systems.
Speaker: Amitabh Nag
Big tech often uses a brute‑force methodology—scraping data, treating individuals and communities as data. We must get as close as possible to the communities themselves and support them to preserve their own cultures and languages.
Critiques the dominant corporate approach to multilingual AI, highlighting ethical concerns about consent, cultural respect, and the risk of a condescending ‘data‑for‑them’ model.
Created a turning point toward ethical reflection; subsequent speakers (e.g., Alex Ilic, Annie Hartley) referenced the need for community‑led validation and neutral, open‑science approaches.
Speaker: Aya Bedir
It’s not just infrastructure; the real bottleneck is talent. Only about 100 people worldwide can build foundation models. Benchmarks are dominated by big companies and reflect their metrics, not cultural needs.
Identifies talent scarcity and biased benchmarking as systemic obstacles, urging a re‑orientation of research incentives toward academia and culturally relevant metrics.
Deepened the technical discussion, prompting acknowledgment of the need for new benchmarks and collaborative compute resources, which later tied into the MOVE project mentioned by Annie.
Speaker: Alex Ilic
Access to language and culture is a human right… we need to make sure all countries are invited to the dinner table, not just part of the menu.
Elevates multilingual AI to a human‑rights issue and uses a vivid metaphor that underscores inclusive participation versus tokenism.
Reinforced the democratic framing introduced by Markus and broadened the moral scope, influencing later remarks about sovereignty and equitable access.
Speaker: Petri Myllymäki
Sovereignty means power for societies and individuals. We must account for code‑switching, dialects, and the fact that language use is fluid. Frugality and small‑data approaches are essential for many communities.
Links linguistic diversity to political sovereignty and practical constraints, introducing the idea that AI systems must handle fluid multilingual realities and limited resources.
Shifted the tone toward geopolitical considerations and resource‑efficient AI, setting the stage for discussions on scalable yet frugal model training.
Speaker: Participant from Singapore (Dean of Humanities, Arts and Social Sciences)
When we asked a model in a low‑resource language how to treat diabetic ketoacidosis, it answered ‘thou shalt not eat insulin on a Tuesday.’ This shows that models trained on generic data (e.g., the Bible) can be dangerously inaccurate in high‑stakes medical contexts. We need real‑world validation (MOVE) and neutral, open‑science pipelines.
Provides a stark, concrete example of the risks of multilingual models in critical domains, emphasizing the necessity of validation, neutrality, and community‑driven evaluation.
Served as a powerful turning point that moved the discussion from abstract policy to tangible risk, prompting agreement on the need for validation frameworks and influencing the concluding emphasis on impact‑driven research.
Speaker: Annie Hartley
Overall Assessment

The discussion was anchored by Markus Reubi’s framing of multilingual AI as a democratic imperative, which set a normative agenda. Ground‑level insights from Amitabh Nag demonstrated feasible data‑collection strategies, while Aya Bedir’s ethical critique of big‑tech practices and Alex Ilic’s focus on talent and biased benchmarks introduced critical reflections on how the field is currently structured. Petri Myllymäki’s human‑rights metaphor and the Singapore dean’s sovereignty‑focused remarks expanded the conversation to global equity and resource constraints. Finally, Annie Hartley’s vivid medical failure story crystallized the real‑world stakes, compelling participants to acknowledge the urgency of validation and neutral, community‑centric approaches. Collectively, these comments redirected the dialogue from high‑level announcements to concrete challenges, ethical considerations, and actionable research pathways, shaping a more nuanced, interdisciplinary, and action‑oriented outcome.

Follow-up Questions
Could you share your key focus interests and why you focus on hardware?
Understanding the hardware emphasis of Current AI clarifies how the initiative plans to scale multilingual AI infrastructure and support open‑source development.
Speaker: Aya Bedir
Please describe the work of Bhashini and its future direction.
Provides insight into India’s large‑scale multilingual AI project, its data collection methods, and upcoming language expansions, which are critical for collaborative research.
Speaker: Amitabh Nag
Could you share experiences with Apertus and discuss Swiss AI and potential inclusion of Indian languages?
Helps assess the current state of the open multilingual model, identify gaps, and explore how Indian languages can be integrated in future releases.
Speaker: Alex Ilic
Why did the Nordics recommend multilingual language model initiatives to the world?
Explores the Nordic perspective on language as a human right and the rationale behind advocating for global multilingual AI efforts.
Speaker: Petri Myllymäki
Could you discuss the importance of sovereignty and language in AI?
Addresses how national and individual sovereignty intersect with multilingual AI development, informing governance and policy frameworks.
Speaker: Participant from Singapore (unnamed)
Could you share how multilingual AI impacts high‑stakes medical use cases and your role in ICAIN?
Highlights real‑world medical implications of multilingual models, the need for validation in critical settings, and the role of neutral institutions in overseeing such deployments.
Speaker: Annie Hartley
Area: Developing scalable, ethically sourced digital data for low‑resource languages
Data scarcity hampers model performance; ethical, community‑driven data collection is essential to build accurate multilingual AI without exploiting communities.
Speaker: Amitabh Nag, Aya Bedir
Area: Creating cost‑effective methods to improve performance of multilingual models beyond English
Current models lag in non‑English languages; identifying affordable strategies is vital for equitable AI capabilities.
Speaker: Alex Ilic
Area: Building talent and capacity in academia to train foundation models
There is a severe shortage of experts able to build foundation models; expanding academic expertise is crucial for sustainable multilingual AI development.
Speaker: Alex Ilic, Markus Reubi
Area: Designing culturally relevant benchmarks and evaluation metrics for multilingual AI
Existing benchmarks favor large companies and English; new metrics reflecting cultural contexts are needed to guide model improvement.
Speaker: Alex Ilic
Area: Validating AI models in high‑stakes medical contexts across diverse languages
Ensures safety and reliability of AI recommendations in critical healthcare scenarios, especially for under‑represented languages.
Speaker: Annie Hartley
Area: Exploring frugal AI approaches for resource‑constrained environments
Developing AI that operates effectively with limited compute and data supports inclusion of low‑resource regions.
Speaker: Aya Bedir, Participant from Singapore
Area: Incorporating code‑switching and dialectal variation into multilingual models
Real‑world language use involves mixing languages and dialects; models must capture this to be truly inclusive.
Speaker: Participant from Singapore
Area: Mechanisms to include currently excluded 119 UN member states in multilingual AI initiatives
A large number of countries are not part of existing efforts; strategies are needed to broaden participation and avoid a digital divide.
Speaker: Petri Myllymäki
Area: Assessing the cost and resource requirements to raise language performance to parity with English
Understanding financial and compute needs will help plan funding and collaborative efforts for language equity.
Speaker: Alex Ilic
Area: Establishing governance frameworks for AI sovereignty and community control over tools
Addresses power dynamics and ensures that communities retain control over how AI models using their language and cultural data are deployed.
Speaker: Aya Bedir, Annie Hartley

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

How AI Is Transforming India's Workforce for Global Competitiveness

Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel explored how artificial intelligence is reshaping the workforce, noting both the opportunities it creates and the anxiety it generates among young professionals in India [8-12]. Organisers divided the discussion into three parts, beginning with an examination of the nature of the disruption and how companies are designing responses [14-16].


Panelists agreed that software engineering is experiencing the most rapid transformation, overtaking testing and infrastructure management as the primary area of impact [34-37]. They emphasized that the real value of AI lies in solving previously intractable problems, which creates vast opportunities for technically savvy graduates who acquire new competencies such as system-level judgment, interdisciplinary fluency, continuous learning, and deep contextual awareness [45][57-64][65-70][71]. Consequently, young talent must adopt a different skill set to realise these opportunities [45].


Representatives from the UK described a coordinated national effort that includes the AI Skills Partnership, training more than one million people, and initiatives to convert non-AI graduates into AI-ready professionals [90-98][100-107]. They highlighted the need for interdisciplinary and “human” skills, role redesign that reduces team sizes while accelerating delivery, and the TechSkills programme that aligns university curricula with employer requirements [109-115][112-116]. While anxiety about displacement exists, the UK strategy aims to turn it into agency through reskilling, upskilling and lifelong learning pathways [98-104][105-107].


Across both regions, speakers cautioned that AI adoption will be incremental, requiring organizational redesign and careful governance rather than a simple replacement of coders [238-246][274-283]. They warned of concentration risks, exclusion of under-served groups, and over-automation without human oversight, calling for inclusive-by-design policies, interoperable skill taxonomies, and broader access to data and compute infrastructure [324-332][334-342][354-359][365-373]. The discussion concluded that coordinated, inclusive, and continuously adaptive approaches are essential to harness AI’s potential while mitigating its workforce risks.


Keypoints


Major discussion points


AI is reshaping the workforce and creating both disruption and opportunity.


The panel opened by framing AI as a source of “opportunity” and “anxiety” for youth and asked how disruption is being designed or imposed [8-12]. Srikrishna highlighted the rapid shift in which software engineering is now the most impacted function, overtaking testing and infra-management [33-37]. He also stressed that the “real value of AI is … solving problems you could not solve before” and that opportunities for technically-savvy young professionals are “enormous” if they acquire the right skills [45-46].


New skill sets are required: system-level judgment, interdisciplinary fluency, continuous learning, and contextual awareness.


Ravi listed four core capabilities: system-level judgment of AI outputs, interdisciplinary fluency across engineering, regulation and risk, a continuous-learning mindset, and deep contextual awareness for India’s multilingual environment [57-71]. Sue echoed this, noting the need for “human skills” such as client-facing and governance abilities, and described the UK’s AI Skills Partnership that aims to train over a million people [91-99][101-107].


Role redesign and reskilling are essential; coding will become a low-cost, highly-automated activity.


Srikrishna described how AI is compressing software squads from 7-10 members to as few as three, forcing a redesign of roles and a shift from pure coding to higher-level problem solving [238-242]. Ravi explained Mastercard’s governance model that embeds AI risk management across product, engineering, legal and compliance, and stresses that “governing AI at scale … is fundamentally a workforce challenge” [191-202]. Sue added that even if AI writes code, humans will still be needed to audit it, creating new governance roles [169-177].


Policy, ecosystem coordination, and inclusive frameworks are needed to avoid concentration and exclusion.


The UK is pursuing an “AI Skills Partnership” and a “TechSkills Gold Accreditation” to align industry, academia and government, while stressing an iterative, flexible approach [95-107][109-116]. Sangeeta contrasted this with India’s “disaggregated” efforts across multiple state and industry bodies, calling for a whole-of-government strategy [108-109]. Ravi warned of “concentration risk” if only a few institutions control data and compute, and urged inclusive AI literacy from schools onward [324-340][345-347]. Srikrishna concluded that inclusiveness must be “by design” and that academia should make AI resources freely available [365-373].


Education and curriculum must evolve to embed AI governance, interdisciplinary thinking, and lifelong learning.


Ravi argued that AI education should extend beyond computer-science majors, integrating governance, risk and policy into curricula and focusing on role redesign rather than only reskilling [214-226]. Sue highlighted the need for a national taxonomy and interoperable skill credentials so learning is portable across sectors [354-359]. Sangeeta stressed that schools and colleges must teach critical-thinking, problem-solving and provide access to data and compute resources [72-74][73-74].


Overall purpose / goal


The panel’s purpose was to explore how artificial intelligence is transforming the workforce, identify the emerging skill gaps, discuss how organizations, governments and educational institutions can redesign roles, curricula and policy frameworks, and share concrete examples from India, the UK and the private sector to guide a coordinated, inclusive transition.


Overall tone


The conversation began with an exploratory and diagnostic tone, acknowledging both excitement and anxiety about AI’s impact. As the discussion progressed, the tone became prescriptive and solution-focused, offering specific skill frameworks, governance models, and policy initiatives. Throughout, the tone remained collaborative and forward-looking, with moments of optimism about opportunities tempered by caution regarding risks of exclusion and concentration.


Speakers

Ravi Aurora – Role/Title: Mastercard representative (senior executive) – Areas of expertise: AI in fintech, risk management, AI governance, digital transformation. [S1]


Sue Daley OBE – Role/Title: Director, Tech and Innovation, Tech UK – Areas of expertise: AI policy, workforce upskilling, technology innovation, AI governance. [S5]


Srikrishna Ramakarthikeyan – Role/Title: Senior executive, IT services sector – Areas of expertise: AI impact on software engineering, workforce displacement, upskilling, AI adoption. [S6]


Sangeeta Gupta – Role/Title: Panel moderator – Areas of expertise: AI workforce transformation, panel facilitation.


Additional speakers:


Vishnu R. Dusar – Role/Title: President, Global Public Policy and Government Affairs, Mastercard – Areas of expertise: Public policy, government affairs, fintech.



Full session report: Comprehensive analysis and detailed insights

The panel opened with Sangeeta Gupta stating its focus was AI and workforce transformation, noting both “opportunity” and “anxiety” for young people in India and the need to “decode what AI really means and how we navigate these shifts” [8-12]. She outlined a three-segment structure, the first examining the nature of the disruption and how organisations are designing for it [14-16].


Disruption focus: Srikrishna Ramakarthikeyan (IT services) observed that while earlier in the year testing seemed the most vulnerable, the rapid evolution of AI has made software engineering the function experiencing the greatest impact, with infrastructure changes plateauing [33-43]. He warned that AI adoption will likely remain in the low single-digit-percent range per year because organisations must embed context and governance, and AI does not yet “understand context” [238-246][274-283].


Opportunities for graduates: Ravi Aurora (Mastercard) highlighted AI’s role in decision-making and governance, emphasizing the need for new skill sets rather than asserting that AI will not eliminate jobs [57-71]. Srikrishna, for his part, argued that AI enables problem-solving that was previously impossible, creating “enormous” opportunities for technically-savvy young professionals [45-46].


Skill framework: Ravi enumerated four critical capabilities for an AI-augmented workforce: (i) system-level judgment to detect model drift and assess high-stakes outputs; (ii) interdisciplinary fluency across engineering, regulation, risk and user behaviour; (iii) a continuous-learning mindset to keep pace with evolving models; (iv) deep contextual awareness for India’s multilingual environment [57-71]. Sue Daley (Tech UK) added that “human skills” – client-facing, governance and ethical judgment – will become the differentiators as routine tasks are automated [91-104].


Role redesign vs. reskilling: Srikrishna described how AI can shrink a typical squad from 7-10 members to three and cut delivery cycles from two weeks to two days, making role redesign essential [238-245]. He also noted that voluntary AI-tool training has driven higher engagement than mandatory programs [255-268]. Ravi stressed that organisations should redesign roles and embed AI governance early in product design, rather than focusing solely on reskilling [191-202][222-226].


Governance structures: Mastercard has instituted a formal AI governance framework with a Chief AI & Data Governance Officer, a Chief Privacy Officer, and a “privacy-by-design” approach that spans data science, product, legal and compliance teams [191-202]. Sue pointed out that even when AI writes code, human auditors will still be needed [169-178].


UK national approach: The AI Skills Partnership aims to train over one million people and offers one-year conversion courses for non-AI graduates [95-107]. TechUK’s TechSkills Gold Accreditation aligns university curricula with employer needs [112-116]. Sue also highlighted AI adoption already occurring in finance, healthcare, transport and logistics, and described infrastructure initiatives such as the National Data Library and AI Growth Zones [120-130].


India’s fragmented landscape: Sangeeta noted the lack of a “whole-of-government” approach, with multiple state bodies and industry groups operating in silos [108-110]. Sue agreed that there is no silver-bullet solution and advocated for an iterative, flexible strategy [109-111].


Risks and inclusiveness: Ravi warned of a concentration risk if only a few institutions control data, compute and talent, urging equitable access and hands-on exposure across geographies [324-332][337-342]. Both Ravi and Srikrishna called for inclusiveness by design, with Srikrishna urging academia to make AI resources as freely available as the internet [365-373]. Sue emphasized turning anxiety into agency through inclusive reskilling pathways [98-104][345-347].


Education recommendations: Ravi suggested curricula should embed the full AI lifecycle, integrating risk, policy and product thinking, and focus on role redesign rather than pure reskilling [214-226]. Sue proposed a national skills taxonomy and interoperability of credentials to ensure portability of learning [354-359], and called for early AI literacy in schools [354-359].


Proposed actions: Expand the AI Skills Partnership and conversion courses; promote TechSkills Gold accreditation; operationalise Mastercard’s AI governance framework across industries; adopt voluntary AI-tool training; develop a national AI-skills taxonomy; invest in data libraries, AI Growth Zones and compute infrastructure; pilot role-redesign projects that shrink squads and accelerate delivery; create interdisciplinary curriculum modules that embed governance, risk and policy alongside technical training.


Unresolved issues: Defining precise timelines for large-scale adoption; ensuring junior-level contextual learning when entry-level roles are automated; establishing metrics to monitor the effectiveness of governance frameworks and skill-taxonomy implementation [238-246][274-283][354-359][324-332][365-373].


Overall, the panel demonstrated high consensus on the need for interdisciplinary skills, role redesign, inclusive policy and coordinated upskilling, while a moderate disagreement persisted regarding the extent to which human coders will remain necessary. The discussion moved from a diagnostic overview of AI disruption to a concrete roadmap for workforce transformation, underscoring that successful AI integration hinges on education, governance, and inclusive, adaptable policy rather than technology alone.


Session transcript: Complete transcript of the session
Speaker

President, Global Public Policy and Government Affairs Mastercard, Vishnu R. Dusar, Co-Founder and MD, Nucleus Software, Sue Daly, Director, Tech and Innovation, Tech UK.

Sangeeta Gupta

Thank you so much, Pragya, and a very good morning to my wonderful panelists. We have a few audience in the room, but we have a lot more online. So I’m looking forward to, you know, yeah, we can get out. You are here, Ravi, next to me. And Vishnu is just on his way. He should be here shortly. I think the theme of our panel is AI and workforce transformation. And clearly, from a, you know, India perspective, the AI is obviously creating a number of opportunities. It’s also creating a lot of anxiety amongst the youth. And I think it’s important. It’s important to decode what does AI really mean and how do we navigate these shifts that are ahead of us.

So in terms of structuring the panel, I thought we’ll try and break it into. three different segments. The first segment is clearly about what is the disruption and how are we designing for it? So try and get perspectives from each of the panelists on how are you seeing this disruption? Are we shaping this disruption or is this disruption really shaping us? So Kish, if I can start with you maybe, right? From one of the sectors that’s most hotly debated is IT services and you’re a leading company in that space. How are you seeing this change for your employees? Do you see software coding now only being done through AI tools? So what is the job of the coder if you look at it?

But how real is this disruption and how are you staying ahead of the shifts that are there?

Srikrishna Ramakarthikeyan

So I think the direction of travel is indisputable. That there is disruption. There’s a lot of there’s an issue of technology capability and there’s an issue of adoption. And there’s always that technology capability leads adoption. Adoption is going to impact, is going to determine workforce displacement or disruption. But the capability, there’s no doubt that this capability that exists today, actually this capability that existed three months ago, six months ago, where there’s quite a large chunk of work that is done by the industry that could potentially be displaced or improved or in some way impacted by AI. What is it that is getting impacted is changing very rapidly. So you would ask me at the beginning of 24, right?

What services will get impacted? What services will get most impact? Out of say testing, actually I’ll put BP of India. I am saying in tech I would put testing first and I would have put software engineering last. Today I will flip that. I will say software engineering is the most. So the direction of travel I think…

Sangeeta Gupta

So you really think software engineering is bigger disruption than testing and infra management or other stuff, right?

Srikrishna Ramakarthikeyan

That is true. So I think whatever disruption we saw I thought would be there in infra. I think it is there but it is a plateau. I am not seeing leaps and bounds of change. What we saw as a potential change like a year ago and now is not so different. I think the massive difference is in software engineering.

Sangeeta Gupta

So you know if you are a young software professional… How do you see… What does this mean for me as that young fresher out of college right now?

Srikrishna Ramakarthikeyan

I'll say opportunities for a young, technically savvy person are enormous now. There are things they need to think of and do differently for that opportunity to become real for them, because the real value of AI is not in reducing headcount in functions, whether it's in BPO or some functional work. That's not the real value. The real value is in being able to solve problems that you could not solve before. And I think you need to arm yourself with a completely different set of skills to make that real, but if you do that, I think the opportunities are enormous at a young age.

Sangeeta Gupta

Thanks Kish, I'll come back to you. Ravi, if I can come to you: MasterCard is very strong obviously in financial services, but you have a very strong data and technology play. How are you seeing this workforce disruption? And for a company like yours, which has a very large GCC in India, what are the different kinds of skill sets that you're thinking about today?

Ravi Aurora

Sure, thank you very much, and thanks to NASSCOM. Great to be here on the panel with Sue and Srikrishna. So there has been a lot of change over the last two decades when I look at our industry. All the professionals in privacy, cyber security, data protection, technology risk have been enablers of digital transformation, right? They have created what we enjoy today in terms of digital empowerment and the ability, talking from a payments lens, to transact very seamlessly wherever in the world you are, right? All of that is riding on trust, and there's a lot that goes in to build that trust, right?

So now we are seeing AI being embedded into decision-making, public infrastructure, service delivery, and governance. So it's no longer a downstream compliance function as such. I think we are seeing a bigger shift than the fintech disruption that came before, in terms of the kind of skill sets that are required. So, to your question on what kind of skills are required: I would say first is the capability for system-level judgment.

What we mean by that is: are you able to take the outputs that are coming from AI and understand whether the model is drifting? In a high-stakes and regulated industry like ours, that becomes essential, because decisions scale instantly, and so do the systemic errors and the impact of those errors if left unchecked. So it's important to have that system-level judgment. Then, interdisciplinary fluency is important, because the AI challenges are not just technical; they are at the intersection of engineering, regulation, risk, and user behavior. So it's important to have professionals who work across those domains and take that interdisciplinary approach rather than working in silos.

Then comes the need for a very continuous learning mindset, because AI systems are evolving with data, and the workforce needs to evolve too: the ability to learn from live environments, to adapt models, to refine decision-making. So: system-level judgment, interdisciplinary fluency, a continuous learning mindset. And last but not least, a deep contextual awareness is needed. In a country like ours, India, with multiple languages, dialects, informal systems, if an AI agent is interacting with a user, the question is: does it understand the context, the intent, and the real-life realities, or is it just the language? Because context is shaped by how the models are trained, which means engineers have to consciously design for it. So that contextual ability and awareness is very important.

Sangeeta Gupta

So the typical engineer, the coder as we knew it, obviously has to build a very differentiated set of skills, is what you're really talking about, right? Understanding interdisciplinary learning, understanding context, the ability to continuously learn: that in itself is becoming a skill. So clearly, I think there's a lot of change needed at the college level and school level in how you're learning, so that you are ready for this very rapidly changing world. So if I can come to you, Sue: you represent TechUK here. How are you seeing the AI disruption in the UK workforce? Is there anxiety?

Are there opportunities that you are seeing? And how are you as an organization, and of course the UK government, supporting this transition?

Sue Daley OBE

Well, thank you. That's a question and a panel all in itself. It's a real pleasure to be here. Thank you so much for the invitation to be part of the summit. Just to say to everybody, you've done an amazing job. So thank you. And it is absolutely a discussion that we're having in the UK. What I've found really useful this week, if I can be slightly selfish for a moment, is to listen to the conversations that you guys are having here, and the other global people that are here at the summit, and to kind of compare notes.

Are we having the same conversations? Are we facing the same kind of issues? I think what I've just heard from my fellow panellists are some of the conversations that are happening in the UK. Yes, there is change. Yes, there is disruption happening. And to your point, absolutely, what we're seeing is a lot of roles, not just in our industry and sector but across industries and sectors, moving away from admin tasks and routine cognitive tasks. Those are being increasingly automated. But that's freeing people up to do more problem-solving, to look at more client advisory and governance work, and to shift those skills towards AI governance. But also, I would say, client-facing work as well, which goes to your point around skills.

I'll come back to your broader question, but yes, it's technical, yes, it's governance, but it's also those people skills, those human skills, which sounds very weird to say. You know what I mean: it's that ability to interact, those more social skills. If jobs are shifting towards "yes, this automation can do the job, but what's the added value that I can provide?", then it's my human skills that matter. So are we teaching those, as well as the technical, the legal, the governance, the software skills? Are we also teaching people, and the young people coming through, how to interact with people if they're more client-facing? So absolutely, we're feeling the disruption in the UK, we're having that discussion in the UK. Definitely the industry is questioning: what will my role be, where will I sit? The UK government is focusing very squarely on this. As part of its AI Opportunities Action Plan, it has created an AI skills partnership, bringing together the government bodies that are looking at how do we upskill, how do we retrain, how do we get society ready for this next wave of AI that's coming, not just the one we have now but the one coming down the line, and bringing in companies and bodies such as TechUK and others to look at how we do this in collaboration.

So how do we reach the wider population, and I'm not just thinking of our industry here but wider society, with the training courses, the upskilling courses, the opportunities to learn and gain skills to thrive in an AI world? But then also, how do we train our industry and sector for the shift that is happening as well? I think that task force is looking to train over one million people in AI so that we can help the greater population be ready for working in this era. I think there is anxiety. I think there is concern. Some workers are understandably worrying about displacement, worrying that if they're at high exposure to automation, what does that mean?

How do they shift? How do they move? But what we are looking at is, and this is a word I've heard a lot this week, how do you turn anxiety into agency? How do we encourage people to take a lead, to take what they've learned, with, as you said, continuous learning, continuous upskilling, because that is what you will need to thrive in this world. In terms of helping people do that, it's restructured training and reskilling programs, and pathways from mid-career into new careers. One of the very interesting initiatives the UK government introduced was for people coming out of university who might not have an AI degree: they can do a one-year conversion course to then be able to work in the AI industry.

So I think there are lots of different initiatives, which perhaps we'll go into a little bit more, that the UK is doing which could be applicable here, and vice versa, we want to learn from how you're addressing this. But yes, there is anxiety, and the question is how you turn that into opportunity and agency.

Sangeeta Gupta

And you know, one of the issues in India we keep raising with the government is that we have a very disaggregated focus right now. Within India there are multiple governments, multiple state governments, organisations like NASSCOM; we're all trying to do some part of the pie, but there is no, if I can use that word, whole-of-government or whole-of-country approach that says: if this is such a big disruption, this is how we will go about addressing it. Do you see that in the UK, that there is an integrated approach, where obviously every actor has their own role to play?

Sue Daley OBE

I think it's coming. First of all, I don't think there's a silver bullet, I don't think there's one pure answer, because, as you said, things are moving rapidly and quickly; the moment you put in a task force or initiative, it may very quickly need to shift and change. So in all of this, and with AI generally, having an iterative, flexible approach that can adapt and shift as the technology evolves and new developments emerge is really, really key. So I think the AI skills partnership, which we've signed up to with the UK government, has really become a bit of a cornerstone, a nucleus of how we retrain and upskill the general population.

But then there's also the conversation about how we ensure our schools, our education curriculum, what young people are learning in schools, is joined up to the AI revolution. And while there's some thinking there, I think it could be more joined up. And then, yes, of course, how are we training the industry? How are we getting the freshers leaving universities with the skills that we need as industry? Part of TechUK is an organisation called TechSkills; go and check them out, not right now but maybe afterwards. At TechSkills we work directly with employers, with technology companies, and with universities, so we can be the bridge between the two, making sure that industry employers can provide input into the university courses and what they're teaching students. Then, when students come out of university, they have what's called a TechSkills Gold Accreditation Degree, which means employers will recognise that degree and go: yes, you've got what I need, come and work for me. So there's no one single answer to this. It's a number of initiatives that need to work together, and at TechUK, as elsewhere, we're trying really hard to join the dots. TechSkills addresses what employers need from universities and how we get universities and employers working more closely together. Then: what can government do that industry can't, and vice versa, what can industry do that government can't? It's really got to be a partnership and a collaboration, but there's no single initiative, in my view, that will fix or solve or address this.

Sangeeta Gupta

I think that's probably a great way to think about it: there are just so many changes that there's no single silver bullet, like you said. You really have to figure out a way to tie the different threads together, but let a thousand flowers bloom, because that's the nature of what we're dealing with, right, if you can bring it together and say: here's our coordinated approach.

Sue Daley OBE

I definitely think in the UK we could join up these initiatives more, and maybe India, with your scale, can do that. And you've definitely brought the world together in this summit, so I've no doubt that you can do that.

Sangeeta Gupta

Wonderful. So Kish, if I can come back to you, again from an IT services perspective: we've always been one of the largest employers of engineering talent in this country. Now, with the new skills that Ravi talked about, do you see the focus shifting largely to more elite, top-tier institutions? A large volume of students were studying in tier-2, tier-3 colleges across the country and had a phenomenal career in our industry. Are we closing out opportunities for them?

Srikrishna Ramakarthikeyan

I want to make a point on a previous question and then I'll address this. I agree there's no silver bullet. However, I'll say that I live in the US, and the conversation I hear there about policy around AI is: should we regulate, should we not regulate? Who should regulate? Should it be the state? Should it be the central government? I'm not hearing what I heard here, which is a big focus on inclusiveness. And while it may not have all of the answers, I think it's still a very material difference in the approach of how the government here is thinking about it.

And actually, I heard that from the UK minister earlier. I heard it from President Macron yesterday in the plenary session. So I think there's a big difference in some countries, relative to at least what I'm hearing in the U.S.: a much bigger focus on how to make it work for everyone, how to make it inclusive, which I think is a huge difference. I think it will lead to a very material difference in outcomes over a period of time. Now, coming back to your question. First, do I know all the answers? No. But here are some pieces that I think are true.

First, I've seen that young AI-native talent is much better at many things than somebody who's in their 30s whom you're trying to retrain. It's much like, you know, do you use Instagram? I don't, actually, but there are kids who have grown up with it, right? It's the same difference as the digital native: I think you're going to see an AI-native generation. And we find this. Last year there was a set of people we hired from the absolute top engineering schools, like the IITs, and we had them train our management team on vibe coding in May last year, because vibe coding back then was brand new. And guess who were the best at it in the company? The people who had just come out of college. So we had them train us. So I think this part is going to be true: whenever we think of the pyramid, we have to bear in mind that sometimes the best talent is the youngest, the one that is coming in. The second piece: I think ultimately the new opportunities created by AI will be far greater than the number of jobs AI could directly reduce. Now, there's going to be a transition period, and I'm not sure exactly how that plays out, but I'm very confident that ultimately AI is going to create so many more things that will need building, applications and tech to build for. And I think the third is also true: for kids, the problem to solve is not tech, is not coding.

It's not, you know, creating data structures or whatever it is that kids are trained to solve. That's a solved problem, solved by AI. So now you've got to think about what problems you want to solve, which is something else, and which is where the big…

Sangeeta Gupta

So, Kish, I'm going to hold you to that, where you said AI will create more jobs than it changes. We'll see how that plays out. But you know, in one of the conversations I was having with another IT services company, they were saying: this AI-native talent is great, but that talent will never have learned to work without AI. Does that mean that some of your foundational and core skills will not be as solid as they were in the past? Because this is the world you've grown up with, and your dependence on these tools will be so high, does it lead to a lack of some foundational skills too, right?

Srikrishna Ramakarthikeyan

Listen, there was a time when coding you had to do in C++, right? And the whole evolution of coding, as an example, has been abstracting what you need to code into something higher-level, right? You wouldn't have had IDEs, I don't know how many years ago, right? But who codes without an IDE now? Nobody, right?

And that's been true for, whatever, a decade. So I think the question will become: who codes now? And I don't think anybody will code, okay? That'll be a solved problem. So no, is it going to be a disadvantage? I think far from it. I think it's going to become a significant advantage. The cost of coding is going to become zero. The cost of code is going to become zero. What that means is you can solve any number of problems with code that you couldn't solve before, because it was too complex or too expensive to do so. So, absolutely not. I think it's going to be a big advantage.

Sue Daley OBE

Yeah, really fascinating. Just on the coding point, you're absolutely right. And I'm thinking, as a woman in tech as well: we had a big focus in the UK on getting girls into coding. Brilliant. But now, well, why? There is an opportunity there, but there's also a risk. So coding, AI for coding, great. But we will need somebody to check the code. So again, it's that shifting and moving of skills. And then my brain went to: okay, the people that were doing the coding, could we reskill them into checking the code and going more into governance? But then my brain goes to: hang on, AI might be able to check the code quicker than a human can.

But then you get to the point where somebody needs to check that the AI has checked the code correctly. So, you know, if you're baking governance and assurance into AI, humans will need to be in the loop. So how can people in the coding world be shifted in their role, shifted to help more on the governance side? I did have another point, however my jet-lag brain means I've forgotten it, so I'll give way.

Sangeeta Gupta

But if you've never coded in your life, how do you know what to check for?

Sue Daley OBE

Oh, I remembered my point; it's related, in a way, to what the gentleman from Mastercard was saying about context. Context is really, really key. Something that's in my brain as well is that people who have worked in organisations over the last couple of years have done junior roles: they've learnt the company, they've learnt the sector, they've learnt the industry, they've done the grunt work to learn the context and learn what's important. What concerns me slightly is the people coming in who will be using AI: when do we give them time to learn the company, when do we give them time to learn the context? What are they getting exposed to? You know, when I first started in a company, I started in the basement and worked my way up, but I knew my sector, I knew my industry, I knew that background and context, I knew what I was checking and why. So if automation takes those junior roles away, how are people getting exposed to the context of what a fintech industry needs and what it looks like, if those opportunities which came through more junior roles are no longer there? So there are huge opportunities here, but there's also some rethinking we need to do as an industry and a sector: are we skilling people with the right things for what the industry needs going forward?

Sangeeta Gupta

Thank you. So Ravi, if you want to take both questions. One: we have a million-plus engineers graduating every year; what are the jobs for them? You've talked about the skills they need, and today tech jobs are not just in the tech industry, they are in every sector, but what do you see as the opportunity for them? And secondly, this whole question of what humans, or the engineer, will do if AI is going to do all the coding, right?

Ravi Aurora

…flows, how operational controls shape risk over time, and when to intervene. Then I think we have to make governance interdisciplinary and influential, which requires fluency across law, technology, ethics, and operations. Like I mentioned before, privacy and AI governance cannot operate only in silos. So future readiness requires a big structural change in design, in procurement, in deployment. And we also have to close the uneven digital capability across institutions; we talked about that. If central agencies and large enterprises can attract talent while smaller ones cannot, that will create governance gaps, especially where AI is expanding the most.

And those are risks where we need to make sure we have the right solutions, or the right thought process, because it is about going beyond elite specialization towards a more broad-based AI and digital literacy. At MasterCard, what we have done is spend several of the last years operationalizing responsible AI, not just as a policy exercise but as a workforce and capability challenge. We have a formal, established AI governance framework. We have a chief AI and data governance officer. We have a chief privacy officer. And we have a privacy-by-design approach in everything.

We ensure that AI risks are addressed before systems are built and deployed, not afterwards. And we have an AI governance team that works horizontally across data science, product, legal, compliance, and engineering, knowing how important that integration layer is. And then the product and engineering leaders, you could say, are the first-line stewards of AI risk. They are not just recipients of compliance decisions; they are the stewards up front. But that happens when you get that integration right up front. So from a MasterCard perspective, what we have learned is that governing AI at scale is fundamentally a workforce challenge that requires interdisciplinary skill.

And early integration into product design is required, and we need governance professionals who can manage risk and not just enforce rules. So privacy by design, security by design, those are core principles, but how you bring those things together in this evolving landscape is what's important.

Sangeeta Gupta

And I think that's a fascinating part of this conversation, right? The whole focus on ethics, principles, trust, security, privacy by design. And as you think about this, Ravi, going back to the large student workforce that we are building for tomorrow: how do we get them to imbibe many of these principles? Obviously, when they come into your organizations, there are structured programs that you're running to drive this thinking. But if we had to take this back to the whole college and university ecosystem, any recommendations you have on how to drive that?

Ravi Aurora

No, absolutely. So clearly, from a corporation perspective: when I opened the news this morning, I even took a picture of it, the very first thing on the TV, the headlines, was AI skills and the skill gap, right? And a lot of that discussion was obviously based on what's happening this week as part of this summit. So clearly, business, academia, government, we all have a role to play in navigating this workforce transition. For corporations, it's not enough just to say you're offering internships to students.

I personally feel we should look at how our chief learning officer, or the engineering leaders who are at the front line, are working with people in academia and actually helping think through and design courses based on real-world examples and situations that are emerging. And certainly, when people come into internships, it helps them get that exposure and take it back into their learning environment. But the whole facet of curriculum design is changing: it should not be restricted only to computer science majors. AI in its different forms is required across a broad set of disciplines, right?

So it's not something that we can leave only to computer science majors per se. So in terms of priority, embedding AI governance and interdisciplinary collaboration into the curriculum is one of the very first layers we have to begin with, so that the people coming in, the engineers you talked about, are trained to think across the full life cycle of an AI system and not in a siloed way, right? And that is what I talked about: bringing engineers, product, risk, policy, all of those together. Then I think another priority, and we talk about it, is to focus on role redesign and not just reskilling.

Because AI is transforming tasks within jobs rather than eliminating roles entirely, what we have to look at is how we redesign roles, rather than only focusing on reskilling. And we have to build an inclusive and distributed talent pipeline. Here I go back to CII and other organizations we have worked with, where you go into the field and work, let us say, with MSMEs, working with the last mile, understanding their challenges and pain points, and bringing that into our product design and the output that's required.

Because that provides the context, right? So the ability to take our talent pipeline and expose them to the real world, helping them contextualize, is very, very important.

Sangeeta Gupta

Thank you. Kish, if I can follow up with you: Ravi spoke about two themes. One is role redesign. So how are you seeing role redesign happen in a technology services context? And secondly, there's so much we hear about the changing role of the engineer; this whole "forward-deployed engineer" is becoming the new buzzword in town. How are you seeing this happen in your organization?

Srikrishna Ramakarthikeyan

Thank you. Specifically on role redesign, that is absolutely true. Again, going back to software engineering: a typical squad that builds software may have had seven, eight, ten people: some developers, some testers, a scrum master, the typical roles. In the extreme case we are seeing it down to three people: one product owner, one developer, one tester. That is substantial redesign of the roles, and the time it takes to do the work is coming down from two weeks to two days. So yes, you won't see real value from AI unless you are redesigning the roles. Now, we have been speaking a lot about capabilities, but I think we should spend enough time on adoption.

And I think there is a pretty big gap. Actually, I think that gap is good for the workforce, because no matter what the capabilities are, by the time they become real, adopted at scale into the workforce and into our enterprise customers, it's several years. In aggregate, ultimately, I think the impact on work, and hence on the workforce, is maybe in low single-digit percentages per year at most. Even 1 to 2 percent right now; maybe that expands to 2 to 3 next year. This is because of the speed of adoption and the multiple constraints on adoption. Because I don't think AI knows context, right? Everybody speaks as if context were solved. But, you know, "mad" could mean what the word mad means in one enterprise.

It could mean the old word for Chennai in another enterprise, right? So there are many reasons why adoption, I think, is going to be slow. And frankly, one of the reasons is role redesign, because it is not as simple as getting a coding tool or a data tool. It is an organizational redesign to make that happen.

Sangeeta Gupta

And are you engaged in enabling all your employees to use these tools, given some of the issues around governance risks that are being talked about?

Srikrishna Ramakarthikeyan

Yeah, 100%. I think it's a little bit silly to tell employees that they cannot use it. We are already in the second generation of retraining our employees on AI. The first generation was on Gen AI, and I would say even as of January last year, once the whole concept of agentic AI came in, whatever you had learned till that point became useless. So we are doing that second generation of training. Now, what we found is that earlier we used to mandate training; we wanted everybody to learn and we were pushing employees to learn. Then we stopped it. We said, hey, it's up to you. The truth is, if you don't learn, you are going to be redundant.

So it's not for us that you learn; it's for you. And suddenly we're finding that the number of people who are actually getting trained is more, not less, once you stop mandating it. So, yeah, are there privacy risks with Facebook? Do people use it? The answer to both is yes. So I think you're just going to find a generation of people who think about these risks very differently.

Sangeeta Gupta

So, you know, yesterday at the Impact Summit, the CEO of Anthropic spoke about, I think it was, the 100x geniuses in a data center, right? That’s the kind of intelligence at scale that will exist as these technologies really mature. Given the gap between that capability and deployment at scale, how do you see the role of humans shifting, and what is this human-AI collaboration that we are all talking about, right?

Srikrishna Ramakarthikeyan

See, the thing is this, and I tell my customers this: stop chasing the shiniest object. There is always going to be an advancement in technology every month, every two months, every three months. Something better will come. And in the quest to keep chasing that, what you’re actually doing is not realizing value from anything. So, for me, most enterprises can get significant value if they fully and systematically adopt capabilities that existed a year ago, certainly capabilities that existed six months ago. So, what is the relevance of a data center full of geniuses for most enterprises? I think it’s zero. What problems can it solve that enterprises… I think enterprise problems are not to do with IQ. They are far more complex than a linear IQ issue.

So, I think yes, it may be true that AI can do a thousand things that humans can’t, but that’s not relevant. So I think the real focus is not capability but how you help enterprises adopt, and that is the real answer to your earlier question. What do people do if machines do the coding? Actually the problem you are trying to solve is not writing code; you are always trying to solve for some other problem. I think that’s the re-skilling that engineers and young talent need to go through. For me now, AI knowledge is like English: it’s foundational, it’s fundamental. I need to be in the business of solving for something else.

And there, I think the point you have made several times about engineering being interdisciplinary is crucial. How many times do you go to a doctor and get frustrated? Listen, I don’t want an eye doctor. I don’t want a nose doctor. I actually want a doctor. Right? And, you know, that’s true in engineering. Think about robotics. You don’t want a mechanical engineer. You don’t want a software engineer. You don’t want an AI engineer. You don’t want an electrical engineer. You want an engineer. And I think that is where our talent needs to go. Now, frankly, I think academia has a big job to do to help them get there, because our courses are not designed like this right now.

They’re designed as electrical and whatever else. But I think young talent who are reorienting themselves that, hey, AI is not the skill. AI is very foundational. But I’m going to use that to solve for something more meaningful. I think we’ll just be fine on workforce.

Sangeeta Gupta

Yeah, so if I can come to you, right? I think you’ve heard a lot about how learning has to change, whether it’s the critical thinking that we’re talking about, problem solving, experiential, use-case based. But at the same time you need access to data, you need access to compute, you need access to research, right? So how is the UK thinking about this, and are there examples that India can learn from?

Sue Daley OBE

Yeah, absolutely. So when we think about realizing the opportunities, economic and social, of AI, it isn’t just about skills. Skills is part of it, but it’s about, to use that word again, getting the foundations right. So in the UK, particularly last year, a lot of initiatives and a lot of investment have been put into getting the infrastructure right. Whether that is looking at our data infrastructure, with the UK government announcing a national data library initiative. I was about to say we have huge data sets, but you guys have massive data sets. But the data sets we have: how are we using them, how are we bringing them together, not just for public services and public sector use but potentially for industry use as well? So data infrastructure, absolutely. A lot of investment has gone into compute infrastructure: the creation of AI growth zones, dedicated areas in the UK where perhaps we don’t have the compute infrastructure right now, and how are we building that? Part of it is also investment into an AI research resource: dedicated compute power and chips to allow AI researchers, at that fundamental research level, to do the work they’re doing. So absolutely a lot of focus. And if I reflect on 2025 in the UK, yes, the conversation was a lot about how do we get the foundations right, how are we getting the infrastructure right. Where I want the conversation to shift is to adoption. There is already adoption happening in the UK, whether it’s financial services, whether it’s in our healthcare system, whether it’s transport, logistics, but boy, there’s so much potential, completely agree.

And at Tech UK we’re really looking at how do we accelerate that AI adoption at pace and speed, in a way that we don’t get it wrong from a governance, ethics, responsibility, and regulation point of view, and how do we get it right for people, while moving quickly enough to realise the opportunity. That’s really something we’ll be advocating for more this year: what can government do to help, but also what can we as industry, particularly the tech industry, do to help other sectors understand how they can do that as well? That’s really the core mission of my work at Tech UK. Skills comes into it, of course, but so does public trust and confidence. None of what we’re talking about here is going to really fly if people don’t trust and have confidence in using AI, or in having AI used about them. So there are lots of initiatives happening: compute infrastructure, absolutely; access to data; making sure that researchers have what they need, industry have what they need, SMEs have what they need. Skills is an integral part of that; it’s all linked, it’s all connected. But I completely agree, adoption is really the key. I was at a reception last night at the High Commission, and Rishi Sunak, the previous Prime Minister, was talking about which country will win the AI race. We’re talking about sovereignty; the previous panel was talking about how sovereignty is key for India, and it’s key for a lot of countries, and we’re looking at what data, tech, and AI sovereignty mean for the UK. But Rishi Sunak’s point was that the countries that will win the AI race are not the countries looking at sovereignty, or at the stack, or at infrastructure; they are the countries that can demonstrate adoption, that can win the race in adoption,
and that can integrate AI across all their sectors, their industry, and their economy. And definitely in the UK we’re very much tying digital and AI adoption, deployment, and diffusion into society and our economy as a key driver of growth and productivity as well. So lots going on, but with that central core theme of how do we get this right as well.

Sangeeta Gupta

I fully agree. I think getting deployment right is really the opportunity, or challenge, for economies that are not competing on the LLMs, right? So I think that’s what India has to get right. Because, to Keech’s point, the enterprise doesn’t necessarily need the shiniest toy; AI is needed to solve India’s deep healthcare challenges, it’s needed to solve some of our agriculture-related issues, right? And I think that’s where the whole inclusion focus, and what AI can do for you, really comes in.

Sue Daley OBE

I think sometimes we have to take a step back and just realize how transformational, how exciting this technology is. I mean, many of us have been talking about this for a number of years. But where we are in terms of compute infrastructure and compute power that we never had before, in terms of digital data and data sets that we’ve never really had before, I don’t know, this does feel like a step change. This does feel like a different moment in time. And it’s how do we grasp that moment in time, which I think is really important. How do we help young people and everybody working in the industry to understand what grasping this opportunity means for them as well?

Sangeeta Gupta

No, I think we’re reaching the end of our session, but I just want to get to the last question and quick comments from all of you, right? Ravi, what would be your top three priorities for business, academia, and government to successfully navigate this AI workforce transition? And what are some risks they should plan for?

Ravi Aurora

Great question. On priorities, I think I already mentioned this whole interdisciplinary collaboration, and the whole aspect around role redesign and so forth, right? On risks: we’ve been talking about how AI has democratized access and so forth, right? But there is also a concentration risk that we have to be aware of. When a small set of institutions or companies or talent pools pull ahead disproportionately, because they have access to better data, compute, and research ecosystems, then I think we have to be very deliberate in how we design our systems.

Right? I think this is where India has a position of strength. You talked about the million-plus engineers; we are coming from a position of strength because India has contributed to the global technology revolution. Look at the growth of our global capability centers; they reflect the depth of the talent pool that exists, right? And as we go forward, we have to get that design aspect right: foundational digital and AI literacy in the school curriculum, equitable access to tools and infrastructure, hands-on exposure across geographies, right?

And then we also have to go beyond top-tier institutions to tier-two and tier-three, because otherwise, again, we’ll come back to the concentration risk. We don’t need just people who can build AI; we need professionals who can build with AI, who can govern AI, and who know when to override AI, right? So I think that’s important. And in terms of risk, we have to make sure we don’t go towards over-automation without adequate human oversight. And biases need to be taken into account, because it should work well for formal workers as well as informal workers, right?

Women entrepreneurs, you know, vernacular, because, and we talked about context and the contextual aspect of it, right? Otherwise, we risk exclusion at scale. And, you know, to Sri’s point, we want that inclusion that you talked about. You know, and I already talked about the…

I’m sorry, I know we are ending the session, so whoever is ringing the bell, please, we’ll be done on time. Yeah, okay. So, okay, I’ll just conclude there: it is about this transformation that we need.

Sangeeta Gupta

So I think you articulated it very well, right: the risk of concentration, the risk of exclusion, and obviously the risk of not doing it thoughtfully, right? Those were very well articulated. So if I can come to you, right: from a workforce transition standpoint, what do you see as our big opportunities and risks, right?

Sue Daley OBE

Yeah, I’m glad you could hear that bell as well; I thought it was only in my head. So, on priorities, very quickly. For businesses, touching on some of the points you were making: embed lifelong learning. We all need to continuously learn, and so do our organisations. Businesses should think not just about jobs and roles but about tasks: what are organisations looking for people to do? Organisations also need to think about the opportunities and the risks; they need to invest in human skills along with technical skills and governance skills. For government, we see something in the UK that we think should be prioritised, and I don’t know if this will resonate here in India, but it’s interoperability of skills credentials. If we’re focusing on lifelong learning, and I learn a skill, take a course, earn a credential, how is that transferable? Can it be recognised elsewhere? Because people will need to shift and people will need to move. But also a national taxonomy of skills, and perhaps of the requirements and the fundamental, foundational skills that we’re talking about.

Are we all talking the same language? Are we all talking about the same skills? Some priorities there, but I’ll leave it there.

Sangeeta Gupta

So a new skills taxonomy and interoperability of skills credentials, I think that’s going to be very important in this environment. But technology is changing so fast, right? What was applicable last year is not going to be applicable this year. Keech, if we can come back to you for the closing comments. How are you seeing this?

Srikrishna Ramakarthikeyan

I’ll maybe just say one thing, okay? Sorry. I think inclusiveness has to be by design.

Sangeeta Gupta

Okay, we’re just ending. We said that we’re ending. It’s just 24 seconds, right? Yeah, why don’t you just close then, Keech?

Srikrishna Ramakarthikeyan

If you look at it, the internet is very inclusive. That’s because academia made something free. I think we need academia to do that for AI; that’s how it becomes more inclusive, and I think this has to be a huge priority.

Sangeeta Gupta

Thank you. Thank you. Welcome, all of you, to our session, Reimagining AI and STEM Education for India’s Next Generation. Celebrating the vision of Viksit Bharat and its grandeur, we are witnessing the AI revolution during the India AI Summit. With a young population, a vibrant digital ecosystem, and strong policy momentum, we are uniquely positioned to harness AI not only for the economic future,

Related ResourcesKnowledge base sources related to the discussion topics (18)
Factual NotesClaims verified against the Diplo knowledge base (4)
Confirmedhigh

“Sangeeta Gupta opened the panel focusing on AI and workforce transformation for young people in India”

The knowledge base lists Sangeeta Gupta as the panel moderator for the session titled “How AI Is Transforming India’s Workforce”, confirming her role and the AI-workforce focus [S1].

Additional Contextmedium

“AI adoption will likely remain in the low single‑digit‑percent range per year because organisations must embed context and governance, and AI does not yet “understand context””

A study of AI adoption notes that adoption remains uneven with far fewer organisations scaling beyond pilots, indicating generally low adoption rates and the importance of data readiness and governance gaps [S83].

Confirmedmedium

“Future workforce needs critical thinking, judgment capabilities, and empathy when working with machines – aligning with the four capabilities outlined by Ravi Aurora”

The knowledge base explicitly states that the future workforce will require critical thinking, judgment capabilities, and empathy when collaborating with machines, supporting the skill framework described in the report [S89].

Additional Contextlow

“Disruption from AI is driving workforce displacement and adoption will determine its impact”

The source remarks that technology capability leads adoption, and adoption will determine workforce displacement, providing broader context for the panel’s discussion of disruption [S81].

External Sources (89)
S1
How AI Is Transforming Indias Workforce for Global Competitivene — – Ravi Aurora- Srikrishna Ramakarthikeyan- Sue Daley OBE – Ravi Aurora- Sue Daley OBE – Ravi Aurora- Sue Daley OBE- Sr…
S2
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S3
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S4
Building the Workforce_ AI for Viksit Bharat 2047 — -Speaker 1- Role/Title: Not specified, Area of expertise: Not specified -Speaker 3- Role/Title: Not specified, Area of …
S5
How AI Is Transforming Indias Workforce for Global Competitivene — – Srikrishna Ramakarthikeyan- Sue Daley OBE
S6
How AI Is Transforming Indias Workforce for Global Competitivene — -Srikrishna Ramakarthikeyan- (Role/title not clearly specified, but appears to be from IT services sector based on discu…
S7
How AI Is Transforming Indias Workforce for Global Competitivene — Speakers:Ravi Aurora, Srikrishna Ramakarthikeyan Speakers:Ravi Aurora, Srikrishna Ramakarthikeyan, Sue Daley OBE Speak…
S8
How AI Is Transforming Indias Workforce for Global Competitivene — -Sangeeta Gupta- Panel moderator (role/title not specified in transcript)
S9
AI for Social Empowerment_ Driving Change and Inclusion — Sure. Let me just address the coding part of it. I think for over 15 years, the industry has been trying to explain to t…
S10
From India to the Global South_ Advancing Social Impact with AI — This comment directly addresses one of the most anxiety-provoking aspects of AI adoption – job displacement. By framing …
S11
Building the Next Wave of AI_ Responsible Frameworks & Standards — yeah so I think to the point Ankush was mentioning AI technology is fundamentally designed on probabilistic model and an…
S12
AI That Empowers Safety Growth and Social Inclusion in Action — So that was one. The OECD hyper -reporting framework, which came out of the Hiroshima AI process, is another very good t…
S13
Adobe, IBM, Nvidia, among tech companies signing Biden’s AI guidelines — Several major US tech companies, including giants such as Adobe, IBM, and Nvidia,have signedon to President Biden’s volu…
S14
How can Artificial Intelligence (AI) improve digital accessibility for persons with disabilities? — The speaker argued for an understanding of disabilities as one of the many normal variations within human diversity, ass…
S15
Ministerial Roundtable — The discussion highlighted the importance of carefully understanding the opportunities presented by emerging technologie…
S16
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Achieving inclusive AI requires addressing inequalities across three fundamental areas: access to computing infrastructu…
S17
Responsible AI for Children Safe Playful and Empowering Learning — Hall contends that understanding AI concepts should be considered as fundamental as traditional literacy skills. He beli…
S18
S19
AI (and) education: Convergences between Chinese and European pedagogical practices — Norman Sze: Thank you for introduction. It’s my honor to join this forum and share insight from perspective of professio…
S20
Artificial Intelligence &amp; Emerging Tech — Jörn Erbguth:Thank you very much. So I’m EuroDIG subject matter expert for human rights and privacy and also affiliated …
S21
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — Jay argues that static compliance frameworks become outdated quickly and don’t necessarily provide real security. He adv…
S22
Global AI Governance: Reimagining IGF’s Role &amp; Impact — Nobile provided a key reframing of AI governance challenges, arguing that “the debate is not humans versus machines but …
S23
Impact &amp; the Role of AI How Artificial Intelligence Is Changing Everything — I’ve got one last thing. We’re done. I’m so sorry. You’ve got to hear this. They’re giving me this. Does this mean that?…
S24
AI for Social Empowerment_ Driving Change and Inclusion — But AI is one really big disruption that comes on top of all the other disruptions, and there’s already plenty of eviden…
S25
Diplomatic policy analysis — Policy analysis serves as the backbone of diplomacy’s decision-making. It equips leaders and negotiators with the eviden…
S26
Redrawing the Geography of Jobs / Davos 2025 — Reskilling and upskilling workers is essential to adapt to changing job markets and technological advances
S27
Comprehensive Report: Preventing Jobless Growth in the Age of AI — Augmentation vs. Automation Strategies Economic | Future of work | Development Large enterprises will retrain, reskill…
S28
Digital Cooperation and Empowerment: Insights and Best Practices for Strengthening Multistakeholder and Inclusive Participation — Tripti Sinha: Thank you, Amrita, I was just going to say exactly that. So as I said earlier, it is an ecosystem of colla…
S29
Open Forum #29 Advancing Digital Inclusion Through Segmented Monitoring — Educational data is often not linked to financial data due to government silos. Private sector holds significant data th…
S30
WS #53 Leveraging the Internet in Environment and Health Resilience — Call for thinking globally and integrated in policy decisions; mention of ecosystem including public safety, emergency, …
S31
Digital Ecosystems and Competition Law: Ecological Approach (HSE University) — It aligns with the goals of reducing inequalities and fostering peace, justice, and strong institutions. Another importa…
S32
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S33
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S34
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — And this requires proactive and coherent policy responses. First, people must be at the center of AI strategy, as we hea…
S35
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S36
WS #283 AI Agents: Ensuring Responsible Deployment — As the session reached its time limit (with Prendergast noting the final 10 minutes), the discussion revealed both the p…
S37
How AI Is Transforming Indias Workforce for Global Competitivene — Then, it comes to need for a very continuous learning mindset because the AI systems are evolving with data, right? And …
S38
How AI Is Transforming Indias Workforce for Global Competitivene — Sure, thank you very much and thanks to NASCOM great to be here on the panel with Sue and Sri Krishna so I think like I …
S39
AI adoption reshapes UK scale-up hiring policy framework — AI adoption is prompting UK scale-ups torecalibrateworkforce policies. Survey data indicates that 33% of founders antici…
S40
Empowering Workers in the Age of AI — Several critical issues remain unresolved and require ongoing attention. The fundamental challenge of AI model bias pers…
S41
Indias AI Leap Policy to Practice with AIP2 — Summary:The main areas of disagreement center around governance approaches (regulatory vs. flexible frameworks), investm…
S42
Manufacturing’s Moonshots Are Landing . . . Are You Ready for the Next Wave? — The skill requirements are changing rapidly, making continuous learning and upskilling essential.
S43
Shaping the Future AI Strategies for Jobs and Economic Development — Continuous learning and upskilling will be essential for workforce adaptation to rapid technological change across all s…
S44
Comprehensive Report: AI’s Impact on the Future of Work – Davos 2026 Panel Discussion — Continuous learning and adaptability are essential for future workforce
S45
Laying the foundations for AI governance — High level of consensus on problem identification and broad solution directions, suggesting significant potential for co…
S46
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S47
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S48
Education, Inclusion, Literacy: Musts for Positive AI Future | IGF 2023 Launch / Award Event #27 — A noteworthy observation from the analysis is the importance of governmental intervention and the involvement of private…
S49
WS #270 Understanding digital exclusion in AI era — The discussion underscored the urgency of taking action to prevent further widening of the digital divide as AI technolo…
S50
Review of AI and digital developments in 2024 — Focus on extinction risk of 2023 shifted towards existing risk as you can from visualisaiton of analysis of media covera…
S51
Implications of emerging technology regulations on digital trade negotiations and development (Center for Economic and Policy Research) — The analysis also addresses concerns regarding the oversight of algorithms, asserting that they should not be exempt fro…
S52
Driving Social Good with AI_ Evaluation and Open Source at Scale — This panel discussion focused on the intersection of AI evaluation, open source software, and the challenges of maintain…
S53
Open Forum #46 Developing a Secure Rights Respecting Digital Future — Human rights | Development Need for institutional and normative structures to track technology impact, mechanisms for c…
S54
Artificial intelligence (AI) and the human condition — Despite its generous promise for improved efficiency, productivity, enhanced collective intelligence, and other goodies,…
S55
Driving Social Good with AI_ Evaluation and Open Source at Scale — The conversation then shifted to the growing problem of AI-generated code submissions to open source projects. Sanket Ve…
S56
Why AI coding tools may follow the path of past tech revolutions — In mid-2025, thedebateover AI in programming mirrors historic resistance to earlier breakthroughs in computing. Critics …
S57
How AI Is Transforming Indias Workforce for Global Competitivene — This phrase ‘anxiety into agency’ encapsulates a fundamental psychological and policy challenge of the AI transition. It…
S58
How AI Is Transforming Indias Workforce for Global Competitivene — This phrase ‘anxiety into agency’ encapsulates a fundamental psychological and policy challenge of the AI transition. It…
S59
Inclusive AI Starts with People Not Just Algorithms — Educational disruption emerged as both opportunity and challenge. While speakers agreed that traditional models may be i…
S60
Impact &amp; the Role of AI How Artificial Intelligence Is Changing Everything — And so I’ve talked about health, but this is the same for education, for financial services, logistics. And so I think t…
S61
Manufacturing’s Moonshots Are Landing . . . Are You Ready for the Next Wave? — Building solutions, such as energy-efficient technologies and sustainable design practices, have the potential to decrea…
S62
Redrawing the Geography of Jobs / Davos 2025 — Reskilling and upskilling workers is essential to adapt to changing job markets and technological advances
S63
Welcome Address — Modi stresses that AI will reshape jobs, requiring large‑scale skilling, reskilling, and lifelong learning initiatives. …
S64
https://dig.watch/event/india-ai-impact-summit-2026/how-ai-is-transforming-indias-workforce-for-global-competitivene — And that’s been true for whatever, a decade. So I think that same question will become who codes now? And I don’t think …
S65
Setting the Rules_ Global AI Standards for Growth and Governance — Esther Tetruashvily responded by describing OpenAI’s efforts to evaluate model performance across various languages and …
S66
Digital Cooperation and Empowerment: Insights and Best Practices for Strengthening Multistakeholder and Inclusive Participation — Tripti Sinha: Thank you, Amrita, I was just going to say exactly that. So as I said earlier, it is an ecosystem of colla…
S67
Closing remarks – Charting the path forward — Coherent and interoperable policy frameworks are needed to prevent fragmentation while enabling agile governance
S68
Workshop 5: Bridging Digital Inequalities and Challenges in Multicultural Societies — All speakers agreed that digital inclusion requires comprehensive, multi-faceted approaches rather than single solutions…
S69
Open Forum #29 Advancing Digital Inclusion Through Segmented Monitoring — Need for multi-stakeholder and collaborative approaches to data collection Sustainability and Data Ecosystem Coordinati…
S70
Keynote-Julie Sweet — At the individual level, Sweet emphasises that people must recognise formal education as no longer sufficient for career…
S71
YCIG &amp; DTC: Future of Education and Work with advancing tech &amp; internet — Need for lifelong learning and adaptability Speakers agreed on the need to adapt education systems to include interdisc…
S72
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S73
Defying Cognitive Atrophy in the Age of AI: A World Economic Forum Stakeholder Dialogue — The discussion concluded with broad agreement that successful AI integration requires coordinated action across multiple…
S74
Launch / Award Event #52 Intelligent Society Development &amp; Governance Research — Ahmed Elsabbagh: Thank you for inviting me to be part of your success story and your event here in IGF. My presentation …
S75
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Hemant Taneja General Catalyst — Good afternoon. Let me just start by thanking Shri Prime Minister Modi Ji for getting all the AI thought leaders togethe…
S76
Digital solutions for sustainability: ICT’s role in GHG reduction and biodiversity protection — **Reyna Ubeda** from ITU’s standardisation sector introduced the session’s dual focus on digital solutions for sustainab…
S77
Policy Network on Internet Fragmentation | IGF 2023 — In conclusion, Internet fragmentation is a complex issue without a definitive definition. However, through the identific…
S78
Opening keynote — Bogdan-Martin framed the AI revolution as a pivotal moment for the current generation, calling it an opportunity to take…
S79
Internet Fragmentation: Perspectives &amp; Collaboration | IGF 2023 WS #405 — Furthermore, Green addresses the regulatory implications associated with internet fragmentation. She identifies data loc…
S80
Open Forum #73 The Need for Regulating Autonomous Weapon Systems — Ram Mohan: Thank you. This is one of those things where you need a village to use the microphones. So I wanted to focus…
S81
https://app.faicon.ai/ai-impact-summit-2026/how-ai-is-transforming-indias-workforce-for-global-competitivene — So I think the direction of travel is indisputable. That there is disruption. There’s a lot of there’s an issue of techn…
S82
AI Safety at the Global Level Insights from Digital Ministers Of — There are a few that really jump out to me with my background in national security. And Joshua, you spoke. You’ve spoken…
S83
Adoption of agentic AI slowed by data readiness and governance gaps — Agentic AI is emerging as a new stage of enterprise automation, enabling systems to reason, plan, and act across workflow…
S84
AI adoption vs governance: A contradiction in Australian businesses — A study conducted by Datacom, engaging 318 business decision-makers in Australian organisations, has unveiled a …
S85
AI for Democracy_ Reimagining Governance in the Age of Intelligence — This comment provides a crucial conceptual distinction that reframes the entire discussion. Instead of asking how AI can…
S86
German-Asian AI Partnerships Driving Talent Innovation the Future — Mr. Jaiswal uses the historical example of electricity to illustrate how disruptive technologies initially cause fear bu…
S87
One-Person Enterprise — Richard Socher argues that AI will lead to the creation of entirely new job categories that we currently cannot predict….
S88
Open Forum: A Primer on AI — Another concern is the potential impact of AI on the job market. As AI capabilities advance, certain professions may bec…
S89
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Future workforce needs different skills including critical thinking, judgment capabilities, and empathy when working wit…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Srikrishna Ramakarthikeyan
8 arguments · 139 words per minute · 2207 words · 950 seconds
Argument 1
AI will reshape software engineering more than testing, making coding a low‑cost activity (Srikrishna Ramakarthikeyan)
EXPLANATION
He argues that AI’s impact will be greatest on software engineering, turning coding into a near‑zero‑cost activity. This shift will make traditional coding less valuable as AI can generate code automatically.
EVIDENCE
He first noted that testing was previously seen as the most affected service, but now he believes software engineering is the biggest disruption [34-37]. Later he emphasized that the cost of coding will become zero, allowing any problem to be solved with code that was previously too complex or expensive [165-168].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External commentary notes that coding will become a solved problem and that AI will turn coding into a near-zero-cost activity, supporting this view [S1][S9].
MAJOR DISCUSSION POINT
Scope and Impact of AI Disruption on Workforce
DISAGREED WITH
Sue Daley OBE
Argument 2
AI is expected to generate more jobs than it eliminates, creating new opportunities for technically savvy youth (Srikrishna Ramakarthikeyan)
EXPLANATION
He contends that AI will create far more employment opportunities than it displaces, especially for young people with technical skills. The value of AI lies in solving problems that were previously unsolvable, not in headcount reduction.
EVIDENCE
He described enormous opportunities for technically savvy youth, stating that AI’s real value is in problem-solving rather than reducing headcount, and that new skills are needed to realise this value [45-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panel summaries highlight that AI is expected to create more employment than it displaces and that the narrative around job loss is being reframed as an opportunity [S7][S10].
MAJOR DISCUSSION POINT
Scope and Impact of AI Disruption on Workforce
Argument 3
Adoption of AI will be gradual because organizations need to embed context and governance, slowing immediate disruption (Srikrishna Ramakarthikeyan)
EXPLANATION
He explains that AI adoption will be slow as firms must integrate contextual understanding and governance frameworks before wide‑scale deployment. This need for careful implementation tempers the speed of disruption.
EVIDENCE
He pointed out that AI does not inherently understand context, which makes adoption slower, and that role redesign and governance are major reasons for the gradual rollout [246-252].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussion points stress that lack of contextual understanding, governance needs and role redesign slow AI rollout, confirming a gradual adoption trajectory [S7][S1].
MAJOR DISCUSSION POINT
Scope and Impact of AI Disruption on Workforce
Argument 4
AI enables drastic team‑size reduction (e.g., from 8‑10 members to 3) and compresses development cycles (Srikrishna Ramakarthikeyan)
EXPLANATION
He notes that AI allows software squads to shrink dramatically, cutting the number of developers, testers and other roles. Development timelines are also compressed from weeks to days.
EVIDENCE
He gave a concrete example of a typical squad shrinking from 7-10 people to just three (product owner, developer, tester) and the development cycle dropping from two weeks to two days [238-239].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Specific examples cite squads shrinking to three people (product owner, developer, tester) and cycle times dropping dramatically [S1].
MAJOR DISCUSSION POINT
Role Redesign and Organizational Change
AGREED WITH
Ravi Aurora, Sue Daley OBE, Sangeeta Gupta
Argument 5
Companies are encouraging voluntary AI tool adoption, finding higher engagement when training is not mandatory (Srikrishna Ramakarthikeyan)
EXPLANATION
He observes that making AI training optional leads to greater employee participation than mandatory programs. This suggests that intrinsic motivation drives better uptake of AI tools.
EVIDENCE
He described how the organization stopped mandating AI training, shifted responsibility to employees, and subsequently saw an increase in the number of people getting trained voluntarily [255-268].
MAJOR DISCUSSION POINT
Role Redesign and Organizational Change
Argument 6
AI development must be inclusive by design, leveraging open academic resources to broaden access (Srikrishna Ramakarthikeyan)
EXPLANATION
He stresses that inclusiveness should be built into AI development, using freely available academic resources to ensure broad participation. This approach mirrors how the internet became inclusive through open academia.
EVIDENCE
He argued that the internet is inclusive because academia made it free, and called for a similar open-academic model for AI to achieve inclusiveness [365-373].
MAJOR DISCUSSION POINT
Inclusion, Equity, and Concentration Risks
AGREED WITH
Ravi Aurora, Sue Daley OBE
Argument 7
AI knowledge should be treated as a foundational literacy akin to English, essential for all citizens.
EXPLANATION
He argues that understanding AI will become as basic and universal as reading and writing English, making it a prerequisite skill for participation in the digital economy.
EVIDENCE
He states that AI knowledge is like English, describing it as “foundational” and “fundamental” for everyone, indicating that AI literacy will be a basic requirement for future work and life [291-293].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Commentary argues that AI concepts should be as universal as traditional literacy, positioning AI literacy as a basic skill for everyone [S17][S18].
MAJOR DISCUSSION POINT
Scope and Impact of AI Disruption on Workforce
Argument 8
AI initiatives must be iterative and flexible, adapting quickly as technology evolves.
EXPLANATION
He notes that because AI technology changes rapidly, policies and programmes need to be continuously revised rather than fixed, avoiding reliance on a single “silver bullet”.
EVIDENCE
He says there is no silver bullet and stresses the need for an iterative, flexible approach that can adapt and shift as technology evolves [109-111].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multiple sources call for an iterative, flexible policy and programme approach to keep pace with rapid AI advances, rejecting static “silver bullet” solutions [S11][S21][S22].
MAJOR DISCUSSION POINT
The enabling environment for digital development
Ravi Aurora
13 arguments · 142 words per minute · 2127 words · 896 seconds
Argument 1
Professionals need system‑level judgment to assess AI model drift and high‑stakes decisions (Ravi Aurora)
EXPLANATION
He states that workers must be able to evaluate AI outputs, detect model drift, and make judgments in high‑risk, regulated environments. This system‑level judgment is essential for responsible AI use.
EVIDENCE
He explained that employees need to understand whether AI models are drifting and to intervene when decisions have high stakes, especially in regulated industries like payments [57-61].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Ravi Aurora explicitly highlighted the need for system-level judgment to detect model drift and intervene in regulated, high-risk contexts [S1].
MAJOR DISCUSSION POINT
Emerging Skill Sets Required for AI‑Augmented Roles
AGREED WITH
Srikrishna Ramakarthikeyan, Sue Daley OBE, Sangeeta Gupta
Argument 2
Interdisciplinary fluency across engineering, regulation, risk, and user behavior is essential (Ravi Aurora)
EXPLANATION
He argues that AI challenges span multiple domains, so professionals must be fluent in engineering, regulatory, risk, and user‑behavior perspectives. Silos must be broken to address AI effectively.
EVIDENCE
He highlighted that AI challenges sit at the intersection of engineering, regulation, risk, and user behavior, requiring interdisciplinary expertise [64-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He emphasized that AI challenges sit at the intersection of engineering, regulation, risk and user behavior, requiring interdisciplinary expertise [S1][S22].
MAJOR DISCUSSION POINT
Emerging Skill Sets Required for AI‑Augmented Roles
AGREED WITH
Sue Daley OBE
Argument 3
A continuous‑learning mindset is critical as AI models evolve with data (Ravi Aurora)
EXPLANATION
He emphasizes that AI systems continuously learn from new data, so the workforce must adopt a lifelong learning attitude to stay current. Ongoing upskilling is necessary to keep pace with evolving models.
EVIDENCE
He noted that AI systems evolve with data and that workers need to continuously learn from live environments and adapt models accordingly [67-70].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion notes that AI systems continuously learn from new data, making lifelong learning essential for the workforce [S1].
MAJOR DISCUSSION POINT
Emerging Skill Sets Required for AI‑Augmented Roles
Argument 4
Deep contextual awareness, especially for multilingual environments like India, is required for effective AI interaction (Ravi Aurora)
EXPLANATION
He points out that AI must understand local languages, dialects, and cultural contexts to interact correctly with users. Without this awareness, AI outputs may miss intent or be inappropriate.
EVIDENCE
He described the need for AI agents to grasp context and intent across multiple Indian languages and dialects, stressing that models must be trained with contextual awareness [71-74].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for upskilling and knowledge sharing to ensure AI understands cultural and linguistic diversity, particularly in multilingual contexts such as India [S15].
MAJOR DISCUSSION POINT
Emerging Skill Sets Required for AI‑Augmented Roles
Argument 5
Organizations should focus on redesigning roles rather than only reskilling, aligning tasks with AI‑augmented capabilities (Ravi Aurora)
EXPLANATION
He argues that merely reskilling workers is insufficient; companies must redesign job roles to integrate AI capabilities. This ensures that AI augments tasks rather than simply replacing them.
EVIDENCE
He advocated for role redesign over pure reskilling, emphasizing that AI transforms tasks within jobs and that redesigning roles is essential for value capture [222-226].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Role redesign is identified as a key factor in AI adoption, with emphasis that merely reskilling is insufficient [S1].
MAJOR DISCUSSION POINT
Role Redesign and Organizational Change
AGREED WITH
Srikrishna Ramakarthikeyan, Sue Daley OBE, Sangeeta Gupta
Argument 6
Early integration of AI governance into product design makes product and engineering teams first‑line stewards of risk (Ravi Aurora)
EXPLANATION
He says that AI governance should be embedded from the start of product development, making engineering and product leaders the primary risk managers. This proactive approach reduces downstream compliance burdens.
EVIDENCE
He described Mastercard’s formal AI governance framework, including a chief AI and data governance officer, and explained that product and engineering leaders act as first-line stewards of AI risk rather than passive recipients of compliance decisions [191-199].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Mastercard’s formal AI governance framework places product and engineering leaders as first-line risk stewards, illustrating early integration of governance [S1].
MAJOR DISCUSSION POINT
Governance, Ethics, and Responsible AI Implementation
Argument 7
Curriculum redesign should embed AI governance, interdisciplinary collaboration, and be open to non‑CS disciplines (Ravi Aurora)
EXPLANATION
He calls for university curricula to incorporate AI governance and interdisciplinary work, extending beyond computer‑science majors. This prepares graduates for AI‑augmented roles across sectors.
EVIDENCE
He argued that curriculum design should not be limited to CS majors, should embed AI governance and interdisciplinary collaboration, and should reflect real-world AI challenges [217-219].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He advocated for curricula that go beyond computer-science majors and embed AI governance and interdisciplinary work [S1].
MAJOR DISCUSSION POINT
Education, Upskilling, and Training Initiatives
Argument 8
Mastercard has instituted a formal AI governance framework with a chief AI and data governance officer and a privacy‑by‑design approach (Ravi Aurora)
EXPLANATION
He outlines Mastercard’s comprehensive AI governance structure, featuring dedicated leadership and a privacy‑by‑design philosophy. This framework operationalizes responsible AI across the organization.
EVIDENCE
He listed Mastercard’s AI governance framework, the appointment of a chief AI and data governance officer, a chief privacy officer, and a privacy-by-design approach integrated across data, science, product, legal, compliance and engineering teams [191-197].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel described Mastercard’s governance structure, including a chief AI and data governance officer and a privacy-by-design philosophy [S1].
MAJOR DISCUSSION POINT
Governance, Ethics, and Responsible AI Implementation
Argument 9
Effective AI governance requires interdisciplinary expertise spanning law, technology, ethics, and operations (Ravi Aurora, Sue Daley OBE)
EXPLANATION
He (and Sue) stress that governing AI responsibly demands knowledge across legal, technical, ethical, and operational domains. Interdisciplinary teams are needed to manage AI risk holistically.
EVIDENCE
Ravi highlighted the need for interdisciplinary governance that blends law, technology, ethics, and operations [183-186]; Sue similarly noted that AI governance must involve multiple disciplines and cannot operate in silos [173-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Both speakers stressed that responsible AI governance demands interdisciplinary teams covering legal, technical, ethical and operational domains [S22].
MAJOR DISCUSSION POINT
Governance, Ethics, and Responsible AI Implementation
AGREED WITH
Sue Daley OBE
Argument 10
Concentration risk arises when a few institutions or firms control data, compute, and talent, potentially marginalizing others (Ravi Aurora)
EXPLANATION
He warns that dominance by a small set of organizations over data, compute power, and talent creates systemic concentration risk, limiting broader participation in AI development.
EVIDENCE
He identified concentration risk as a danger when a limited number of companies or institutions have disproportionate access to data, compute, and research ecosystems, which could marginalize other players [326-329].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Inclusive AI requires addressing inequalities in data, compute and talent access, highlighting concentration risk when power is held by a few players [S16].
MAJOR DISCUSSION POINT
Inclusion, Equity, and Concentration Risks
Argument 11
Risk of exclusion exists if AI education and tools are limited to elite tier‑1 institutions, underscoring the need for equitable access (Ravi Aurora)
EXPLANATION
He points out that focusing AI training only on top‑tier universities excludes large segments of talent, creating an equity gap. Broadening access to tier‑2 and tier‑3 institutions is essential.
EVIDENCE
He emphasized the need to go beyond top-tier institutions and include tier-2 and tier-3 colleges to avoid concentration risk and ensure inclusive AI talent development [337-340].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to broaden AI education beyond top-tier institutions to avoid exclusion and concentration risk is emphasized [S16].
MAJOR DISCUSSION POINT
Inclusion, Equity, and Concentration Risks
AGREED WITH
Srikrishna Ramakarthikeyan, Sue Daley OBE
Argument 12
Governance gaps arise from uneven digital capability across institutions, requiring targeted capacity‑building interventions.
EXPLANATION
He points out that larger enterprises can attract talent and build AI capability, while smaller firms cannot, creating governance gaps that must be addressed to ensure safe AI deployment.
EVIDENCE
He explains that uneven digital capability across institutions leads to governance gaps, especially where AI is expanding the most, and calls for solutions to bridge these gaps [186-190].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Uneven digital capability creates governance gaps, especially for smaller firms, calling for capacity-building measures [S16].
MAJOR DISCUSSION POINT
Governance, Ethics, and Responsible AI Implementation
Argument 13
Building an inclusive and distributed talent pipeline, especially for MSMEs and last‑mile firms, is crucial for equitable AI adoption.
EXPLANATION
He emphasizes the need to extend AI talent development beyond large centers to smaller enterprises and regional players, ensuring that AI benefits are widely shared.
EVIDENCE
He describes working with MSMEs and the “last mile” to bring contextual challenges into product design, highlighting the importance of an inclusive and distributed talent pipeline [226-230].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion highlights the importance of extending AI talent pipelines to MSMEs and regional players to ensure inclusive adoption [S16].
MAJOR DISCUSSION POINT
Inclusion, Equity, and Concentration Risks
Sue Daley OBE
10 arguments · 184 words per minute · 2940 words · 957 seconds
Argument 1
Automation will replace many admin and cognitive tasks, freeing humans for problem‑solving and client‑facing work (Sue Daley OBE)
EXPLANATION
She observes that AI is automating routine administrative and cognitive activities, which allows workers to focus on higher‑value problem solving and client advisory roles. This shift also demands new skill sets beyond pure technical ability.
EVIDENCE
She noted that many roles are moving from admin and cognitive tasks to problem-solving, client-facing, and AI governance work, highlighting the transition from automation to added human value [91-95].
MAJOR DISCUSSION POINT
Scope and Impact of AI Disruption on Workforce
AGREED WITH
Srikrishna Ramakarthikeyan, Ravi Aurora, Sangeeta Gupta
Argument 2
Human “soft” skills—client interaction, social intelligence—remain vital despite automation (Sue Daley OBE)
EXPLANATION
She stresses that even as automation grows, interpersonal and social skills are essential for client‑facing positions. These “human” skills differentiate workers from machines.
EVIDENCE
She described the need for people to bring added value through human skills such as interaction and social intelligence, especially as jobs shift toward client-facing and advisory roles [92-95].
MAJOR DISCUSSION POINT
Emerging Skill Sets Required for AI‑Augmented Roles
Argument 3
The UK AI Skills Partnership aims to upskill over one million people through coordinated training and conversion courses (Sue Daley OBE)
EXPLANATION
She explains that the UK government’s AI Skills Partnership targets training more than one million individuals, using coordinated programs and conversion courses for non‑AI graduates. This initiative aims to build a broad AI‑ready workforce.
EVIDENCE
She cited the AI Skills Partnership’s goal to train over one million people and mentioned one-year conversion courses for university graduates without an AI degree [95-98].
MAJOR DISCUSSION POINT
Education, Upskilling, and Training Initiatives
AGREED WITH
Srikrishna Ramakarthikeyan, Ravi Aurora, Sangeeta Gupta
Argument 4
TechSkills Gold accreditation creates a bridge between universities and employers, ensuring curricula match industry needs (Sue Daley OBE)
EXPLANATION
She outlines the TechSkills Gold Accreditation, which aligns university programmes with employer requirements, ensuring graduates possess the skills demanded by industry. This creates a recognized credential for hiring.
EVIDENCE
She described TechSkills Gold Accreditation as a degree recognised by employers, developed through collaboration between universities and technology companies to match industry needs [115-118].
MAJOR DISCUSSION POINT
Education, Upskilling, and Training Initiatives
Argument 5
Human‑in‑the‑loop oversight remains necessary for code verification and ethical assurance (Sue Daley OBE)
EXPLANATION
She argues that even with AI‑generated code, humans must review and verify outputs to ensure correctness and ethical compliance. This oversight is a critical component of responsible AI deployment.
EVIDENCE
She discussed the need for people to check AI-generated code, noting that governance and assurance require a human in the loop to validate AI checks [175-178].
MAJOR DISCUSSION POINT
Governance, Ethics, and Responsible AI Implementation
AGREED WITH
Ravi Aurora
Argument 6
Effective AI governance requires interdisciplinary expertise spanning law, technology, ethics, and operations (Ravi Aurora, Sue Daley OBE)
EXPLANATION
She emphasizes that governing AI responsibly demands collaboration across legal, technical, ethical, and operational domains. Interdisciplinary teams are essential to manage AI risk comprehensively.
EVIDENCE
She highlighted that AI challenges cannot be addressed in silos and that interdisciplinary fluency is needed for governance, echoing similar points made by Ravi Aurora [173-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Both speakers stressed that responsible AI governance demands interdisciplinary teams covering legal, technical, ethical and operational domains [S22].
MAJOR DISCUSSION POINT
Governance, Ethics, and Responsible AI Implementation
AGREED WITH
Ravi Aurora
Argument 7
Building public trust through transparent, responsible AI deployment is a prerequisite for widespread adoption (Sue Daley OBE)
EXPLANATION
She states that trust and confidence are essential for the public to accept AI technologies. Transparent, responsible deployment builds that trust and enables broader AI uptake.
EVIDENCE
She repeatedly stressed that trust is a cornerstone of AI adoption, noting that without public confidence AI initiatives will not succeed [82-84] and later reiterated the importance of trust and confidence in AI deployment […].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multiple sources note that trust and transparency are essential for public acceptance of AI, and that policies must be adaptable to maintain confidence [S21][S22].
MAJOR DISCUSSION POINT
Governance, Ethics, and Responsible AI Implementation
Argument 8
Interoperability of skill credentials and a national taxonomy are needed to ensure mobility and recognition across sectors (Sue Daley OBE)
EXPLANATION
She calls for a unified system of skill credentials and a national taxonomy so that learning outcomes are portable across industries and geographies. This would facilitate workforce mobility and lifelong learning.
EVIDENCE
She mentioned the need for interoperable skill credentials, a national taxonomy of skills, and common language for skills to ensure transferability and recognition across sectors [355-359].
MAJOR DISCUSSION POINT
Education, Upskilling, and Training Initiatives
Argument 9
National data and compute infrastructure, such as a data library and AI growth zones, are prerequisite foundations for scaling AI adoption.
EXPLANATION
She argues that without robust data repositories and dedicated compute resources, both public and private sectors cannot effectively develop or deploy AI solutions at scale.
EVIDENCE
She references the UK’s national data library initiative and the creation of AI growth zones to provide the data and compute infrastructure needed for AI research and industry use [84-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The UK’s national data library and AI growth zones are cited as critical infrastructure for scaling AI across sectors [S15].
MAJOR DISCUSSION POINT
Information and communication technologies for development
Argument 10
AI programmes need an iterative, flexible approach because rapid technological change can quickly render policies obsolete.
EXPLANATION
She stresses that AI policy and initiatives must be continuously reviewed and adapted, as new advances can outpace static frameworks.
EVIDENCE
She repeats that there is no silver bullet and that an iterative, flexible approach is essential to keep pace with evolving AI technology [109-111].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Commentary stresses that AI programmes must be continuously reviewed and adapted to keep pace with fast-moving technology [S21][S22].
MAJOR DISCUSSION POINT
The enabling environment for digital development
Sangeeta Gupta
5 arguments · 137 words per minute · 1828 words · 796 seconds
Argument 1
A whole‑of‑government, coordinated approach is needed to avoid fragmented efforts and ensure inclusive policy (Sangeeta Gupta)
EXPLANATION
She points out that India’s current AI policy landscape is fragmented across multiple governments and organisations, lacking a unified national strategy. A coordinated, whole‑of‑country approach would improve inclusivity and effectiveness.
EVIDENCE
She highlighted the disaggregated focus among various Indian governments, NASSCOM and others, and asked whether the UK has an integrated approach, emphasizing the need for a whole-government strategy [108-110].
MAJOR DISCUSSION POINT
Education, Upskilling, and Training Initiatives
AGREED WITH
Sue Daley OBE
Argument 2
Coordinated, whole‑country strategies are essential to manage AI’s societal impact and avoid fragmented policy (Sangeeta Gupta)
EXPLANATION
She reiterates that large‑scale AI disruption requires a unified national plan rather than isolated initiatives. Such coordination can better address societal impacts and ensure equitable outcomes.
EVIDENCE
She again referenced India’s fragmented policy environment and asked about integrated approaches, underscoring the necessity for a coordinated national strategy [108-110].
MAJOR DISCUSSION POINT
Inclusion, Equity, and Concentration Risks
Argument 3
AI creates anxiety among youth; demystifying AI and providing clear guidance is needed to turn fear into agency.
EXPLANATION
She highlights that the rapid rise of AI is causing apprehension among young people, and stresses the importance of education and communication to empower them.
EVIDENCE
She notes that AI is “creating a lot of anxiety amongst the youth” and calls for decoding what AI really means to help them navigate the shift [10-11].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External analysis points out that anxiety about AI-driven job displacement is common and that reframing the narrative can empower young people [S10].
MAJOR DISCUSSION POINT
Scope and Impact of AI Disruption on Workforce
Argument 4
Education curricula at school and university levels must be redesigned to embed AI concepts, interdisciplinary learning, and problem‑solving skills.
EXPLANATION
She argues that traditional curricula are insufficient for the AI era and that learning systems need to incorporate AI literacy, cross‑disciplinary fluency, and real‑world problem orientation.
EVIDENCE
She states that “there’s a lot of change that is needed at a college level and school level on how, you know, even how you’re learning” to prepare students for the AI-driven world [73-74].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for AI literacy as a basic skill and for curricula that integrate AI concepts and interdisciplinary problem-solving are documented [S17][S15].
MAJOR DISCUSSION POINT
Education, Upskilling, and Training Initiatives
Argument 5
AI should be applied to sector‑specific challenges in India, such as healthcare and agriculture, to deliver inclusive societal benefits.
EXPLANATION
She emphasizes that AI’s value lies in solving concrete national problems rather than chasing the latest technology, urging focus on areas like health and farming.
EVIDENCE
She remarks that AI can help solve deep healthcare challenges and agriculture-related issues in India, highlighting the need for sector-focused applications [272-274].
MAJOR DISCUSSION POINT
Social and Economic Development
Agreements
Agreement Points
AI will transform the workforce and requires large‑scale upskilling, continuous learning and new skill sets.
Speakers: Srikrishna Ramakarthikeyan, Ravi Aurora, Sue Daley OBE, Sangeeta Gupta
AI is expected to generate more jobs than it eliminates, creating new opportunities for technically savvy youth (Srikrishna Ramakarthikeyan)
Professionals need system‑level judgment to assess AI model drift and high‑stakes decisions (Ravi Aurora)
Interdisciplinary fluency across engineering, regulation, risk, and user behavior is essential (Ravi Aurora)
The UK AI Skills Partnership aims to upskill over one million people through coordinated training and conversion courses (Sue Daley OBE)
AI creates anxiety among youth; demystifying AI and providing clear guidance is needed to turn fear into agency (Sangeeta Gupta)
All speakers agreed that AI will cause significant workforce disruption but also generate new opportunities, making large-scale upskilling, continuous learning and interdisciplinary skill development essential to prepare youth and address anxiety about job loss [45-48][57-70][95-98][10-12][73-74].
POLICY CONTEXT (KNOWLEDGE BASE)
The claim aligns with multiple analyses that continuous learning and large-scale upskilling are essential for AI-driven workforce change, as highlighted in India’s AI workforce discussions [S37] and broader sector-wide skill studies [S42][S43][S44].
AI adoption will require redesign of roles and can dramatically shrink team sizes and development cycles.
Speakers: Srikrishna Ramakarthikeyan, Ravi Aurora, Sue Daley OBE, Sangeeta Gupta
AI enables drastic team‑size reduction (e.g., from 8‑10 members to 3) and compresses development cycles (Srikrishna Ramakarthikeyan)
Organizations should focus on redesigning roles rather than only reskilling, aligning tasks with AI‑augmented capabilities (Ravi Aurora)
Automation will replace many admin and cognitive tasks, freeing humans for problem‑solving and client‑facing work (Sue Daley OBE)
Question about what this means for a young software professional (Sangeeta Gupta)
Speakers concurred that AI will reshape job structures, allowing smaller squads and faster delivery, and that companies must redesign roles to capture AI-augmented value rather than rely solely on reskilling [238-239][222-226][91-95][44].
POLICY CONTEXT (KNOWLEDGE BASE)
UK scale-up hiring data show anticipated job cuts and slower recruitment as AI automates tasks, supporting role redesign and smaller teams [S39]; similar concerns about accelerated software cycles are raised in debates on AI coding tools [S56].
Interdisciplinary fluency and governance are critical for responsible AI deployment.
Speakers: Ravi Aurora, Sue Daley OBE
Interdisciplinary fluency across engineering, regulation, risk, and user behavior is essential (Ravi Aurora)
Human‑in‑the‑loop oversight remains necessary for code verification and ethical assurance (Sue Daley OBE)
Effective AI governance requires interdisciplinary expertise spanning law, technology, ethics, and operations (Ravi Aurora, Sue Daley OBE)
Both speakers highlighted that AI challenges cross technical, regulatory, risk and ethical domains, requiring interdisciplinary teams and human oversight to ensure responsible AI use [64-66][173-176][183-186].
POLICY CONTEXT (KNOWLEDGE BASE)
International bodies (EU, OECD, UNESCO, UNICEF, ILO, WHO) advocate interdisciplinary approaches to AI governance [S35], and a high level of consensus on coordinated governance frameworks is documented in recent policy analyses [S45][S46].
Inclusiveness and equitable access to AI tools, data and education must be built into AI strategies to avoid concentration and exclusion risks.
Speakers: Srikrishna Ramakarthikeyan, Ravi Aurora, Sue Daley OBE
AI development must be inclusive by design, leveraging open academic resources to broaden access (Srikrishna Ramakarthikeyan)
Risk of exclusion exists if AI education and tools are limited to elite tier‑1 institutions, underscoring the need for equitable access (Ravi Aurora)
Turn anxiety into agency and ensure inclusive pathways for all workers, avoiding exclusion at scale (Sue Daley OBE)
All three speakers warned that without deliberate inclusive policies, AI could concentrate power in a few firms or institutions, marginalising large segments of talent; they called for open-academic models, broader education outreach and inclusive upskilling programmes [365-373][345-348][98-104].
POLICY CONTEXT (KNOWLEDGE BASE)
Reports emphasize the risk of digital exclusion and call for inclusive AI policies that address infrastructure, education and equitable data access [S47][S48][S49][S50].
A coordinated, whole‑of‑government (or whole‑of‑country) approach is needed to align policy, education and industry efforts on AI.
Speakers: Sangeeta Gupta, Sue Daley OBE
A whole‑of‑government, coordinated approach is needed to avoid fragmented efforts and ensure inclusive policy (Sangeeta Gupta)
The UK AI Skills Partnership and TechSkills bridge government, industry and academia to create a unified strategy (Sue Daley OBE)
Both speakers stressed that fragmented, siloed initiatives hinder effective AI deployment and that a unified national strategy linking government, industry and academia is essential [108-110][95-98][112-115].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy roadmaps and governance studies recommend whole-of-government coordination as a best practice for AI strategy alignment [S45][S46][S41].
Similar Viewpoints
Both emphasize that AI governance cannot be siloed; interdisciplinary expertise and human oversight are required to manage risk and ensure ethical outcomes [64-66][173-176].
Speakers: Ravi Aurora, Sue Daley OBE
Interdisciplinary fluency across engineering, regulation, risk, and user behavior is essential (Ravi Aurora)
Human‑in‑the‑loop oversight remains necessary for code verification and ethical assurance (Sue Daley OBE)
Both view AI literacy and contextual understanding as fundamental prerequisites for future work, likening AI knowledge to basic literacy and stressing language/context awareness [291-293][71-74].
Speakers: Srikrishna Ramakarthikeyan, Ravi Aurora
AI knowledge should be treated as a foundational literacy akin to English (Srikrishna Ramakarthikeyan)
Deep contextual awareness, especially for multilingual environments like India, is required for effective AI interaction (Ravi Aurora)
Both recognize youth anxiety about AI and propose empowerment through education and upskilling to convert fear into proactive agency [10-12][98-104].
Speakers: Sangeeta Gupta, Sue Daley OBE
AI creates anxiety among youth; demystifying AI and providing clear guidance is needed to turn fear into agency (Sangeeta Gupta)
Turn anxiety into agency by encouraging people to take the lead, with continuous upskilling and reskilling (Sue Daley OBE)
Unexpected Consensus
Both UK and Indian speakers stressed that there is no single ‘silver bullet’ solution for AI workforce transformation and that policies must be iterative and flexible.
Speakers: Sue Daley OBE, Srikrishna Ramakarthikeyan
AI initiatives must be iterative and flexible, adapting quickly as technology evolves (Sue Daley OBE)
AI initiatives must be iterative and flexible, adapting quickly as technology evolves (Srikrishna Ramakarthikeyan – implied by his emphasis on rapid change and the need for adaptable approaches)
Despite representing different regions, both highlighted the futility of a one-off solution and called for continuous, adaptable policy frameworks, which was not an obvious point of convergence given their differing contexts [109-111].
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions in India’s AI policy arena highlight divergent governance models and the need for adaptable, iterative frameworks, reflecting the ‘no silver bullet’ perspective [S41]; similar iterative approaches are advocated in AI governance consensus documents [S45].
Overall Assessment

The panel showed strong consensus that AI will reshape work, demanding massive upskilling, role redesign, interdisciplinary governance, inclusive access, and coordinated national strategies. Speakers from India, the UK and industry aligned on the need for continuous learning, role transformation, and inclusive policy frameworks.

High consensus across most thematic areas, indicating a shared understanding that successful AI integration hinges on education, governance, and coordinated policy rather than technology alone.

Differences
Different Viewpoints
Future of coding and need for human oversight
Speakers: Srikrishna Ramakarthikeyan, Sue Daley OBE
AI will reshape software engineering more than testing, making coding a low‑cost activity (Srikrishna Ramakarthikeyan)
Human‑in‑the‑loop oversight remains necessary for code verification (Sue Daley OBE)
Srikrishna argues that AI will drive the cost of coding to zero, turning coding into a solved problem and eliminating the need for human coders [165-168]. Sue counters that even with AI-generated code, humans must still check and verify the output to ensure correctness and ethical compliance [175-178].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses warn that AI-generated code can introduce quality and oversight challenges, underscoring the necessity of human supervision in software development [S54][S55][S56].
Unexpected Differences
Necessity of human coders after AI automation
Speakers: Srikrishna Ramakarthikeyan, Sue Daley OBE
AI will reshape software engineering more than testing, making coding a low‑cost activity (Srikrishna Ramakarthikeyan)
Human‑in‑the‑loop oversight remains necessary for code verification (Sue Daley OBE)
While many participants discussed role redesign and skill shifts, the stark contrast between Srikrishna’s view that coding will become a zero‑cost, solved problem and Sue’s insistence on continued human verification of AI‑generated code was not anticipated. This reveals a deeper tension about the extent to which automation can replace traditional programming roles.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on AI coding tools highlight ongoing questions about the continued relevance of human programmers, citing concerns over code quality and a potential coder-user divide [S54][S55][S56].
Overall Assessment

The panel shows strong consensus on the need for interdisciplinary skills, role redesign, and addressing youth anxiety, but a notable disagreement exists on whether coding will become obsolete or still require human oversight. Additional nuanced differences appear around coordination mechanisms and the risk of concentration, though these do not fracture the overall shared vision.

Moderate disagreement: primarily centered on the future role of human coders and the extent of coordination needed. The disagreement suggests that policy and industry strategies must balance optimism about AI‑driven automation with safeguards for human oversight and inclusive implementation.

Partial Agreements
Both speakers agree that a coordinated, whole‑of‑government strategy is essential for effective AI workforce transition. Sangeeta stresses the current fragmentation in India and calls for an integrated approach [108-110], while Sue points to existing UK initiatives (AI Skills Partnership) but acknowledges the need for deeper integration and coordination across sectors [95-98][109-112].
Speakers: Sangeeta Gupta, Sue Daley OBE
A whole‑of‑government, coordinated approach is needed to avoid fragmented efforts (Sangeeta Gupta)
The UK AI Skills Partnership aims to upskill over one million people, but further integration across government, industry and education is required (Sue Daley OBE)
Both aim for an inclusive AI‑driven workforce, but Ravi warns that concentration of resources could undermine inclusivity and calls for deliberate design to avoid exclusion [326-329][337-340], whereas Srikrishna emphasizes the abundant job creation potential of AI for young talent [45-48]. The shared goal of broad opportunity is clear, yet they differ on the perceived risk and necessary safeguards.
Speakers: Ravi Aurora, Srikrishna Ramakarthikeyan
Concentration risk arises when a few institutions or firms control data, compute, and talent, potentially marginalizing others (Ravi Aurora)
AI is expected to generate more jobs than it eliminates, creating new opportunities for technically savvy youth (Srikrishna Ramakarthikeyan)
Takeaways
Key takeaways
AI will reshape software engineering more than testing, making coding a low‑cost activity while creating new problem‑solving opportunities.
Overall, AI is expected to generate more jobs than it eliminates, but the transition will be gradual due to the need for context, governance and organizational redesign.
Emerging roles require system‑level judgment, interdisciplinary fluency, a continuous‑learning mindset and deep contextual awareness, especially for multilingual environments like India.
Human soft skills (client interaction, social intelligence) remain essential even as many admin and cognitive tasks become automated.
Role redesign (smaller, cross‑functional squads, compressed development cycles) is more critical than simple reskilling; AI governance must be embedded early in product design.
Education must move beyond CS‑only tracks to include AI governance, interdisciplinary collaboration, and lifelong learning; curricula need to be aligned with industry needs.
Coordinated, whole‑of‑government approaches are needed to avoid fragmented efforts; partnerships among industry, academia and government are essential.
Responsible AI governance (privacy‑by‑design, human‑in‑the‑loop, interdisciplinary oversight) is a prerequisite for public trust and widespread adoption.
Inclusion by design is vital to prevent concentration of data, compute and talent in a few elite institutions; equitable access to tools, data and training is required.
Resolutions and action items
Support and expand the UK AI Skills Partnership, which aims to upskill over one million people and provide conversion courses for non‑AI graduates.
Promote TechSkills Gold accreditation to align university curricula with employer needs and create interoperable skill credentials.
Mastercard to continue operationalising its AI governance framework, including the roles of Chief AI & Data Governance Officer and Chief Privacy Officer, and to embed governance early in product development.
Encourage voluntary AI‑tool adoption within organizations, noting higher engagement when training is not mandatory.
Develop a national AI skills taxonomy and mechanisms for credential interoperability across sectors and geographies.
Invest in foundational AI infrastructure (data libraries, compute zones) to enable equitable access for research and industry.
Design role‑redesign pilots that reduce team sizes and compress delivery cycles while integrating AI tools.
Foster interdisciplinary collaboration platforms among industry, academia and government to co‑design curricula and upskilling programs.
Unresolved issues
Exact timeline and mechanisms for large‑scale adoption of AI in enterprises, given contextual and governance challenges.
How to ensure junior employees acquire deep organisational context when many entry‑level roles are automated.
Specific strategies to prevent concentration risk where only a few institutions control data, compute and talent.
Details on how tier‑2 and tier‑3 colleges can be effectively integrated into AI upskilling pipelines.
Metrics and governance structures to monitor the effectiveness of AI governance frameworks across diverse industries.
Approaches to continuously update curricula as AI capabilities evolve year‑on‑year.
Suggested compromises
Adopt an iterative, flexible policy and implementation approach that can adapt as AI technology evolves (as noted by Sue Daley).
Combine mandatory baseline training with voluntary advanced upskilling, leveraging higher engagement when participation is self‑driven (as described by Srikrishna Ramakarthikeyan).
Balance focus on elite institutions with broader outreach to tier‑2/3 colleges to mitigate exclusion while still leveraging high‑quality talent pools.
Integrate both technical AI skills and human soft skills in education and workforce programs, recognizing that automation frees time for problem‑solving and client‑facing activities.
Thought Provoking Comments
Software engineering is the most disrupted area, even more than testing or infra management.
Shifts the focus from the commonly‑cited testing disruption to core development work, highlighting where AI impact will be deepest.
Prompted the moderator to ask how a fresh graduate should view the future, steering the conversation toward career implications for young engineers.
Speaker: Srikrishna Ramakarthikeyan
The real value of AI is not in reducing head‑count; it’s in being able to solve problems that we could not solve before.
Reframes AI from a cost‑cutting tool to a problem‑solving catalyst, challenging the fear‑based narrative around job loss.
Led to a more optimistic tone, encouraging other panelists (Ravi and Sue) to discuss new skill sets and opportunities rather than just displacement.
Speaker: Srikrishna Ramakarthikeyan
We need ‘system‑level judgment’, interdisciplinary fluency, a continuous learning mindset and deep contextual awareness to work with AI in high‑stakes, regulated environments.
Identifies concrete, non‑technical competencies that will become essential, moving the discussion from generic upskilling to specific capability gaps.
Shifted the dialogue toward concrete workforce design, prompting Sue to talk about the UK’s AI Skills Partnership and Srikrishna to compare policy approaches.
Speaker: Ravi Aurora
We must turn anxiety into agency – help people take the lead, upskill continuously, and move from fear of displacement to ownership of their career path.
Introduces a psychological dimension to the workforce transition, emphasizing empowerment rather than merely technical training.
Created a turning point where the conversation moved from technical disruption to the human experience, influencing later remarks on reskilling and mental models.
Speaker: Sue Daley OBE
In the US the debate is about regulation; in the UK (and elsewhere) the focus is on inclusiveness – making AI work for everyone. That difference will lead to materially different outcomes.
Highlights a strategic policy divergence that affects how AI adoption will be shaped globally, adding a geopolitical layer to the discussion.
Prompted the moderator to ask about coordinated approaches in India, and sparked Sue’s explanation of the UK’s multi‑stakeholder AI Skills Partnership.
Speaker: Srikrishna Ramakarthikeyan
The cost of coding will become zero – AI will solve problems, not just write code. The real advantage is being able to tackle far more complex challenges.
Challenges the assumption that AI will replace programmers, instead suggesting a paradigm shift where coding becomes a commodity.
Triggered Sue’s comment about the need for humans to audit AI‑generated code, deepening the debate on governance and the evolving role of engineers.
Speaker: Srikrishna Ramakarthikeyan
AI transforms tasks within jobs rather than eliminating whole roles; we should focus on role redesign, not just reskilling.
Moves the conversation from a simplistic view of job loss to a nuanced perspective on how work will be reorganised.
Guided Srikrishna to discuss squad size reductions and adoption speed, and led Sue to talk about curriculum redesign and interdisciplinary education.
Speaker: Ravi Aurora
Adoption will be slow – low single‑digit percentage impact per year – because AI doesn’t understand context and role redesign is complex.
Provides a realistic counter‑balance to the hype, grounding expectations about AI’s rollout in practical constraints.
Tempered earlier optimism, prompting participants to discuss concrete steps (e.g., governance frameworks, training programs) to accelerate adoption.
Speaker: Srikrishna Ramakarthikeyan
There is a concentration risk: a few institutions or firms could pull ahead with data, compute and talent, leaving others behind.
Identifies a systemic risk that could exacerbate inequality, expanding the discussion beyond skills to ecosystem design.
Led Sue to stress the need for interoperable skill credentials and a national taxonomy, and reinforced Srikrishna’s call for inclusive AI by design.
Speaker: Ravi Aurora
Interoperability of skills credentials and a national taxonomy are essential so that lifelong learning can be portable across sectors and geographies.
Proposes a structural solution to the upskilling challenge, moving from ad‑hoc training to a standardized ecosystem.
Influenced the closing remarks, with participants echoing the need for coordinated frameworks and inclusive design.
Speaker: Sue Daley OBE
Inclusiveness has to be by design – academia should make AI as free and open as the internet to ensure broad access.
Calls for a systemic, design‑first approach to democratise AI, tying together earlier themes of policy, education and equity.
Served as a concluding rallying point, reinforcing the panel’s consensus on the importance of inclusive, coordinated action.
Speaker: Srikrishna Ramakarthikeyan
Overall Assessment

The discussion was driven forward by a series of pivotal insights that moved it from a generic overview of AI disruption to a nuanced, action‑oriented dialogue. Early remarks about which functions are most affected (software engineering) set the stage for deeper exploration of the value of AI beyond head‑count reduction. Ravi’s articulation of system‑level judgment and interdisciplinary fluency, coupled with Sue’s emphasis on turning anxiety into agency, shifted the focus toward concrete skill sets and the human dimension of change. Srikrishna’s contrasts between regulatory versus inclusive policy approaches, his realistic view of adoption speed, and his warning about concentration risk introduced strategic and systemic considerations that broadened the conversation. These comments prompted the panel to discuss concrete mechanisms—governance frameworks, role redesign, curriculum redesign, and interoperable credentialing—culminating in a shared call for inclusive, design‑first solutions. Collectively, the highlighted comments shaped the panel’s trajectory, turning abstract concerns into a roadmap for workforce transformation.

Follow-up Questions
What concrete data is needed to accurately assess AI’s impact on different service lines and overall workforce displacement?
Srikrishna repeatedly emphasized the need to ‘look at the data’ to understand which jobs are affected, indicating a gap in empirical evidence that requires further research.
Speaker: Srikrishna Ramakarthikeyan
How can AI education and resources be made inclusive by design, ensuring free and open access for all learners?
He highlighted the importance of academia providing free AI tools to promote inclusivity, suggesting a need to explore models for open AI education.
Speaker: Srikrishna Ramakarthikeyan
What effective strategies can turn workforce anxiety about AI into agency and proactive upskilling?
Sue mentioned the challenge of anxiety and the need to convert it into agency, indicating a research area on behavioral interventions and program design.
Speaker: Sue Daley OBE
How can organizations ensure that new hires acquire the contextual knowledge traditionally gained through junior roles, especially when those roles are automated?
She raised concerns that AI‑driven automation may eliminate junior positions that provide essential industry context, calling for research on alternative learning pathways.
Speaker: Sue Daley OBE
What framework can enable interoperability of skills credentials and a national taxonomy of skills to support lifelong learning and mobility?
Sue highlighted the need for transferable credentials and a common language for skills, pointing to a gap in standardized credentialing systems.
Speaker: Sue Daley OBE
How can the concentration risk—where a few institutions or firms dominate AI data, compute, and talent—be mitigated to ensure equitable AI development?
Ravi warned about concentration risk and called for policies or mechanisms that democratize access to AI resources.
Speaker: Ravi Aurora
What practical models exist for role redesign versus simple reskilling in IT services to maximize AI benefits?
He discussed role redesign but indicated the need for concrete frameworks to guide organizations through this transition.
Speaker: Srikrishna Ramakarthikeyan
How can AI governance and interdisciplinary collaboration be embedded into university curricula across disciplines, not just computer science?
Ravi stressed the importance of integrating AI ethics, risk, and interdisciplinary fluency into higher‑education programs, suggesting a research agenda on curriculum design.
Speaker: Ravi Aurora
What investments and policies are required to build national data libraries, AI growth zones, and compute infrastructure to support AI research and industry?
She described UK initiatives on data and compute infrastructure, indicating a need for detailed studies on effective infrastructure models.
Speaker: Sue Daley OBE
How can equitable access to AI tools and training be ensured for MSMEs and last‑mile workers across diverse geographies?
Ravi highlighted gaps in digital capability among smaller organizations, calling for research on outreach and capacity‑building mechanisms.
Speaker: Ravi Aurora
What safeguards are needed to prevent over‑automation while maintaining adequate human oversight and accountability?
He warned about the risk of excessive automation without human checks, suggesting a need for governance frameworks and risk‑assessment studies.
Speaker: Ravi Aurora
How can AI models be designed to avoid bias and ensure contextual relevance for India’s multilingual, informal, and vernacular contexts?
Both speakers emphasized contextual awareness and bias mitigation, indicating a research priority on culturally and linguistically inclusive AI.
Speakers: Ravi Aurora, Sue Daley OBE
What strategies can accelerate AI adoption across all economic sectors (healthcare, agriculture, etc.) to drive growth and productivity?
She noted the importance of sector‑wide AI diffusion, pointing to a need for cross‑sector adoption roadmaps and impact assessments.
Speaker: Sue Daley OBE
How can a dynamic skills taxonomy be maintained to keep pace with rapid technological change?
Both referenced the challenge of fast‑moving skill requirements, suggesting research on agile taxonomy frameworks.
Speakers: Sangeeta Gupta, Sue Daley OBE
What will be the impact of AI on employment prospects for graduates from tier‑2 and tier‑3 colleges, and how can opportunities be preserved?
The panel discussed potential narrowing of opportunities for non‑elite institutions, indicating a need for inclusive talent‑development studies.
Speakers: Sangeeta Gupta, Srikrishna Ramakarthikeyan
How can coding professionals transition to code‑review and governance roles when AI can perform both coding and checking?
She raised the question of role evolution for coders, highlighting a gap in career pathway research.
Speaker: Sue Daley OBE
What competencies should AI governance professionals develop to move beyond rule enforcement to proactive risk management?
Ravi noted the need for governance staff who can manage risk proactively, suggesting a research area on skill sets and training.
Speaker: Ravi Aurora
How should AI literacy be integrated into school curricula to build foundational understanding from an early age?
She mentioned the importance of early AI education, indicating a need for curriculum development research.
Speaker: Sue Daley OBE
What metrics and methodologies can accurately track the speed and depth of AI adoption within enterprises?
He observed slow adoption and called for better measurement tools, pointing to a research gap.
Speaker: Srikrishna Ramakarthikeyan
What effective upskilling pathways exist for mid‑career professionals to transition into AI‑enabled roles?
She referenced conversion courses and reskilling programs, indicating a need for evaluation of such pathways.
Speaker: Sue Daley OBE

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.