HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI

20 Feb 2026 13:00h - 14:00h


Session at a glance: summary, keypoints, and speakers overview

Summary

The panel examined how heterogeneous compute and voice-first interfaces are reshaping AI adoption in India. Panelists noted that voice is the most natural user interface and that the AI experience must remain consistent even with fluctuating network quality, which requires running inference locally on devices, with smartphones handling 10-billion-parameter models and glasses running sub-billion-parameter models [2-3][6-9][11-13].


Rizvi underscored the environmental stakes of AI, stressing finite energy resources while highlighting India’s vibrant ecosystem of 300 Gen-AI startups, sovereign large language models, and a strong application-layer focus, and he pointed to enterprise-scale bottlenecks in compute availability and connectivity that Cisco aims to address [14-19][22-24].


Arun Shetty identified three core impediments: insufficient infrastructure (power, compute, networking), security and safety of models, and data gaps. He argued that edge inferencing will become dominant, requiring fit-for-purpose solutions that combine edge, on-prem, and cloud resources while ensuring visibility to mitigate hallucinations, toxicity, and malicious tampering [43-48][49-53][54-58][60-62]. Dr. Kamakoti added that sovereign models are essential to thwart adversarial attacks, describing trust as a non-reflexive, non-symmetric, non-transitive, context-dependent relation that must be mathematically defined, and he emphasized that heterogeneous architectures are needed for dynamic threat detection such as advanced deep-packet inspection [51-55][56-63][64-66].


Gokul highlighted vertical-specific edge models that must overcome memory, I/O, thermal and power limits, noting India’s power-intensive data-center profile, the importance of improving PUE, using air-cooled racks where feasible, and adopting hybrid renewable/off-grid energy to support edge deployment that can “leapfrog” underserved regions while reducing total cost of ownership [67-73][74-78][78-82].


Durga concluded by advocating a holistic distribution of compute from devices through edge clouds to data centers, describing Qualcomm’s “hybrid AI” approach that leverages air-cooled carts for large models without always requiring liquid cooling, and the Minister reinforced that policy must secure power, water, land, and infrastructure to enable this distributed AI ecosystem, with welfare and happiness as the ultimate goals [90-98][99-103][139-145].


Overall, the discussion converged on the need for secure, energy-efficient, and heterogeneous AI infrastructure, backed by coordinated policy, to drive India’s next-generation digital transformation [89-95][101-103][139-145].


Keypoints

Major discussion points


Heterogeneous compute & edge inference are essential for a seamless AI experience.


Durga emphasized that AI should remain “invariant to the quality of the communications” by running inference on devices when possible and leveraging edge-cloud and data-center resources as needed [8-13]. Dr. Kamakoti later linked this need to “heterogeneous architecture” for dynamic malware detection [54]. Gokul reinforced the push toward edge inferencing to reach locations with limited connectivity [66-71].


Infrastructure constraints (power, compute, networking) must be addressed with fit-for-purpose solutions.


Arun Shetty listed power, compute, and networking as the infrastructure constraints impeding AI adoption, noting that “more inferencing happening at the edge” will shape future designs [43-48]. Gokul expanded on the power challenge, describing cooling limits, PUE targets, and the need for hybrid energy systems [70-78].


Security, safety, trust, and sovereign models are critical barriers.


Shetty highlighted model hallucinations, toxicity, and the need for visibility across the stack [52-55]. Dr. Kamakoti stressed the importance of “sovereign models” to prevent adversarial attacks and discussed the mathematical foundations of trust [52-60]. Later, Shetty detailed practical guardrails such as asset discovery, vulnerability scanning, and policy enforcement [118-130].


Environmental and energy-efficiency considerations underpin all technical choices.


Rizvi called out the “strong environmental aspect” of AI inference and the finite nature of energy [14-15]. Gokul quantified the power-to-cooling split in data centers, advocated for air-cooled racks where possible, and urged hybrid renewable/off-grid solutions to meet India’s growing demand [70-78].


Overall purpose / goal of the discussion


The panel aimed to map India’s AI landscape, identify the technical, infrastructural, security, and policy challenges of scaling generative AI, and outline coordinated actions, ranging from heterogeneous edge compute to sovereign model development and sustainable energy strategies, that enable responsible, enterprise-grade AI deployment across the country in the near term (2-year horizon).


Tone of the discussion


Opening (Durga & Rizvi): Technical and forward-looking, highlighting opportunities of voice interfaces and heterogeneous compute.


Middle (Shetty, Dr. Kamakoti, Gokul): Becomes more urgent and problem-focused, stressing concrete constraints (power, security, data gaps) and the need for collaborative, fit-for-purpose solutions.


Closing (Rizvi, Minister): Shifts to a supportive, optimistic tone, emphasizing policy alignment, national welfare, and a collective commitment to “leapfrog” with AI while ensuring sustainability.


Overall, the conversation moved from an exploratory technical vision to a pragmatic roadmap anchored in security, energy, and policy considerations, ending on a hopeful note from the ministerial perspective.


Speakers

Durga (Durga Malladi) – Speaker; associated with Qualcomm AI initiatives and workshops [S2][S3].


Honorable Minister – Minister of State for Personnel, Public Grievances and Pensions; involved in administrative reforms and India’s science & innovation agenda [S4][S5][S6].


Gokul (Gokul Subramaniam) – Speaker; contributor to discussions on heterogeneous compute and edge AI [S7].


Arun Shetty – Cisco representative; discusses infrastructure, security, and AI adoption challenges [S8].


Dr. Kamakoti (Prof. V. Kamakoti) – Professor; expert on AI security, sovereign models, and cybersecurity [S9].


Rizvi (Kazim Rizvi) – Panel moderator and speaker; focuses on environmental aspects of AI and policy [S10].


Additional speakers:


Mr. Vichetti – Mentioned by the Honorable Minister; role/title not specified in the transcript.


Sarah (Intel) – Referred to for handing over gifts; affiliation with Intel, role not specified.


Full session report: comprehensive analysis and detailed insights

Opening – Voice-first, multilingual AI (Durga) [1-8]


Durga opened by stressing that voice-first interfaces are strategically vital for India’s multilingual population, describing voice as “the most natural user interface to devices around you” and noting the need to support 14 languages for native-language interaction. He argued that AI should move away from continuous typing or texting toward on-device inference so that the experience remains “invariant to the quality of the communications.” To deliver this, Durga called for heterogeneous compute capable of running a 10-billion-parameter multimodal model on a smartphone and a sub-billion-parameter model on smart glasses, with seamless fallback to edge-cloud or data-centre resources when connectivity permits [1-8].
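Durga’s connectivity-invariant design can be sketched as a simple tier-selection policy. Everything below is illustrative: the tier names, link-quality thresholds, and function are assumptions made for this sketch, not any vendor’s actual scheduler; only the rough on-device (~10B parameters) and edge-cloud (100-300B) budgets echo figures quoted in the session.

```python
# Illustrative compute tiers, most local first. Capacities are parameter
# budgets; None means no practical cap is modeled for this sketch.
TIERS = [
    ("on_device", 10_000_000_000),    # usable even with zero connectivity
    ("edge_cloud", 300_000_000_000),  # needs any usable network link
    ("data_center", None),            # assume it needs a stable link
]

def route_inference(model_params: int, link_quality: float) -> str:
    """Pick the most local tier that can host the model, so the user
    experience stays invariant to connectivity whenever possible.
    link_quality is in [0, 1]; 0 means no network at all."""
    for tier, capacity in TIERS:
        if tier != "on_device" and link_quality <= 0.0:
            continue  # remote tiers are unreachable without a network
        if tier == "data_center" and link_quality < 0.5:
            continue  # assumed threshold for reaching the deepest tier
        if capacity is None or model_params <= capacity:
            return tier
    return "unavailable"
```

With no connectivity, a 5-billion-parameter request stays on-device, while a 50-billion-parameter request cannot be served until a link returns; that asymmetry is the motivation for sizing models to the device in the first place.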


Environmental & ecosystem context (Rizvi) [9-15]


Rizvi shifted the discussion to the broader ecosystem, warning that energy is finite and that “efficiently managing the energy requirements” of AI inference is a critical, often overlooked concern. He highlighted India’s vibrant generative-AI startup scene (about 300 firms building on large language models) and the development of sovereign LLMs such as Sarvam to secure the application layer. Rizvi emphasized the importance of tackling enterprise-scale compute and network-connectivity bottlenecks, noting Cisco’s role in the ecosystem [9-15].


Cisco’s perspective – three impediments & fit-for-purpose architecture (Arun Shetty) [16-30]


Arun Shetty identified three core impediments to AI adoption: power, compute and networking. He projected national AI demand could reach 63 GW within a few years, observed tightening compute capacity, and argued that networking must evolve to support distributed workloads. Shetty advocated “fit-for-purpose” solutions that combine on-device, edge-cloud and on-premise resources, stating that more inferencing at the edge will reshape architecture. He positioned security and safety as an even larger challenge, calling for visibility across the stack to detect hallucinations, toxicity and malicious model manipulation. Shetty also stressed that Cisco cannot solve these problems alone and must work with ecosystem partners [16-30].


Security, sovereign models & formal trust (Dr Kamakoti) [31-45]


Dr Kamakoti expanded the security theme, insisting that sovereign models are essential to prevent adversarial attacks and data-poisoning. He introduced a formal notion of trust, describing it as “not reflexive, not symmetric, not transitive,” and emphasizing its context-dependence and temporal nature. He said a mathematical framework for trust must be built to underpin secure AI deployments. Kamakoti argued that heterogeneous architectures are required for dynamic deep-packet inspection and rapid malware-signature updates, because traditional signature-based inspection cannot keep pace with evolving threats. He also warned that models used in education must be carefully curated to avoid teaching harmful content [31-38][39-41][34-36].
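Kamakoti’s claim that trust fails all three equivalence-relation properties can be checked mechanically on a small example. The sketch below encodes a relation as a set of ordered pairs and reuses the panel’s own illustrations; the names are placeholders.

```python
from itertools import product

def is_reflexive(rel: set, elems: set) -> bool:
    # every element must be related to itself
    return all((a, a) in rel for a in elems)

def is_symmetric(rel: set) -> bool:
    # (a, b) in the relation must imply (b, a)
    return all((b, a) in rel for (a, b) in rel)

def is_transitive(rel: set) -> bool:
    # (a, b) and (b, c) must imply (a, c)
    return all(
        (a, d) in rel
        for (a, b), (c, d) in product(rel, rel)
        if b == c
    )

# Toy relation built from the panel's own examples: I trust Sarah but
# Sarah may not trust me; I trust Gokul, Gokul trusts you, yet I do
# not trust you. (x, y) means "x trusts y".
trust = {("me", "sarah"), ("me", "gokul"), ("gokul", "you")}
people = {"me", "sarah", "gokul", "you"}
```

All three checks come back False for `trust`, while plain equality `{(p, p) for p in people}` passes all three, which is exactly why trust cannot be modeled as an equivalence relation and needs its own mathematical treatment.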


Edge-compute constraints, cooling, PUE & hybrid energy (Gokul) [46-60]


Gokul focused on practical constraints of edge deployment. He explained that vertical-specific models must operate within tight limits of memory, I/O, thermal dissipation and power. Citing India’s data-centre power profile, where roughly 40% of consumption goes to cooling, he advocated improving Power Usage Effectiveness (PUE) by using air-cooled racks where feasible and reserving liquid cooling for densities above ~25 kW per rack. Gokul argued that a hybrid energy mix of renewable, grid and off-grid sources is required, noting that “pure renewable may not be enough; a hybrid mix … is required” to sustain growth and enable “leap-frogging” of AI services to remote regions lacking reliable connectivity [46-55][55-57][56-60].
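For reference, PUE is defined as total facility power divided by the power reaching IT equipment, so an ideal facility approaches 1.0. A minimal sketch applies it to the 40/40/20 split (cooling/compute/connectivity) Gokul cites; whether networking gear counts as IT load is an accounting choice the session does not settle, so both readings below are assumptions, and neither resulting figure was quoted on stage.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by the
    power delivered to IT equipment. Ideal facilities approach 1.0."""
    if it_load_kw <= 0 or total_facility_kw < it_load_kw:
        raise ValueError("IT load must be positive and at most total load")
    return total_facility_kw / it_load_kw

# The 40/40/20 split quoted in the session, per 100 kW entering the
# facility: 40 kW cooling, 40 kW compute, 20 kW connectivity.
compute_only = pue(100.0, 40.0)          # IT load = compute alone: 2.5
compute_plus_network = pue(100.0, 60.0)  # IT load includes networking: ~1.67
```

Either way the gap to 1.0 quantifies exactly the cooling overhead that air-cooled racks and hybrid energy are meant to shrink.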


Closing visions – Hybrid AI, guardrails, policy (Durga, Arun Shetty, Minister) [61-78]


In the closing round-table, Durga reiterated a holistic approach that distributes compute across devices, edge-cloud and data-centres. He described Qualcomm’s “Hybrid AI” strategy, which leverages air-cooled carts for large models (100-300 billion parameters) without always requiring liquid cooling, thereby reducing reliance on a single monolithic data-centre [61-64]. Arun Shetty returned to security, outlining concrete guardrails for enterprises: systematic discovery of “shadow AI” assets, vulnerability scanning of models, and enforcement of policies that block transmission of confidential data to unauthorised third-party services. He referenced industry standards such as NIST, MITRE and OWASP to frame a “secure AI factory” capable of defending against malicious manipulation using AI-driven security tools [65-69][70-73]. The Honourable Minister concluded by linking the technical agenda to national policy, stressing that the government must ensure provision of power, water and land to support AI infrastructure and framing the ultimate goal as “welfare for all, happiness for all,” echoing the panel’s repeated emphasis on energy availability as a primary bottleneck [74-78].
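The outbound guardrail Shetty describes, stopping confidential data from reaching unapproved third-party services, can be sketched as an allow-list plus pattern scan. The patterns, domain names, and function below are hypothetical placeholders, not Cisco’s implementation; production data-loss-prevention systems use far richer classifiers and centrally managed policy.

```python
import re

# Hypothetical patterns a data-loss-prevention guardrail might flag.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-shaped identifier
    re.compile(r"(?i)\bconfidential\b"),   # explicit marking
    re.compile(r"\b[A-Z0-9]{20,}\b"),      # long token or API key
]

APPROVED_SERVICES = {"internal-llm.example.com"}  # placeholder allow-list

def allow_request(destination: str, prompt: str) -> bool:
    """Permit a prompt if its destination is on the allow-list, or if
    nothing in it matches a pattern that looks confidential."""
    if destination in APPROVED_SERVICES:
        return True
    return not any(p.search(prompt) for p in CONFIDENTIAL_PATTERNS)
```

The same prompt is allowed toward an approved internal endpoint but blocked toward an unapproved third-party one, mirroring the “the system should stop me” behaviour described in the transcript.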


Roadmap synthesis


Collectively, the panel agreed on a near-term roadmap (2-4 years) that couples distributed, energy-aware compute with robust, trust-centric security and sovereign data governance to enable responsible, enterprise-grade AI deployment in India. [61-78]


Session transcript: complete transcript of the session
Durga

with them. 14 languages. Voice is the most natural user interface to devices around you. So the idea is not to actually keep typing and texting, but it’s about the usage of voice, but in native languages, which actually work very nicely. And that means that you have to make sure that the use cases are built on top of it. So that’s what our focus is from a processor standpoint. One final note, and given that I have maybe just one minute, another aspect of heterogeneous computers, disaggregation of compute within the network itself. What I mean by that is, at some point in time, you might have extremely good connectivity to the network. And at some other point in time, you might have zero connectivity to the network.

And the question to ask is, do you want your AI user experience to be invariant to the quality of the communications that you have at that point in time? Or do you want it to depend on it? Obviously, you want it to be invariant. That means you must have the ability to run inference directly on devices. Not that you want to do it all the time, but when you can, why not? today we can run up to a 10 billion parameter model multimodal model state of the art on a smartphone and a sub 1 billion parameter model in your glasses without necessarily charging a device the whole day it’s once every 24 hours so we’ve come a long way in that which means use the data centers use the edge cloud as and when necessary they have a role to play at the same time make sure that we also build for devices where the inference actually occurs and users directly perceive that’s where the data originates so it’s important to think about it that way

Rizvi

yeah there’s there’s also very strong environmental aspect to this and which often gets unnoticed and undiscussed but that element is also very important in terms of efficiently managing the energy requirements because energy as we also know is finite and so I think you one thing which I was struck to me which is spoke what was inferences and the other is that it’s not just about the energy but it’s also about the energy and the A lot of what’s happening in India is also around inferencing models, right? So, I mean, in terms of the Gen AI story, which we have, we have almost 300 Gen AI startups, which are building on top of the large language models.

And India is definitely leading the way in terms of application layer. There’s no doubt about that. Now, of course, with Sarvam and others, we are also building sovereign large language models, right? So, we are sort of, as Minister Vaishnav has spoken about, every, you know, piece of the puzzles. We are there in terms of fitting that puzzle together. I’d like to come to Mr. Arun Shetty, sir, is with Cisco. And, you know, we just want to take it further from where Durga sir had left in terms of talking about enterprise adoption at scale. And, you know, of course, with Cisco, what are the challenge of bottlenecks, which you see in terms of compute availability, connectivity, which Cisco is trying to do, which you see in generally.

And I think that’s a really important thing to talk about.

Arun Shetty

Yeah, so as you know, we connect and protect the… This should be working, right? Yeah, yeah, yeah. As you know, we connect and protect even in the AI era, right? We started in the internet, we came into the cloud, and we are in this era. First of all, thank you very much for having me, and it’s indeed a pleasure to be representing this esteemed panel. So I think what I’ll do is I’ll summarize based on what others have spoken, actually, and I think those are real problems. The first one is clearly the three impediments for AI adoption is one is clearly infrastructure constraints, and we all spoke about it, and they all spoke about it.

The first one is the power. power is a challenge will be a challenge i think usc is expecting it will be 63 gigawatts of power in couple of years what they require okay and then the compute is a problem we did recognize that compute is becoming a problem and then uh kamakoti sir did tell that cisco is in networking what are you doing in networking and networking will be a problem actually and then we need to see how we need to address and clearly it has to be a fit for purpose solutions because you not only do huge data centers and i think what we see is in couple of years you will see there is more inferencing happening at the edge and that’s what we need that’s what the how the world will move and that’s why solutions have to be fit for purpose for sure the second bigger challenge what we have is the security and the safety aspect so that is something what we need to pay lot of attention because as the adage says what if you can’t see you can’t trust right you can’t trust something what you can’t see so you need to have the visibility across the stack and also you need to see whether the models what we are using are the right models for us or is there anything malicious into the models itself actually vulnerabilities in that model so the security aspect becomes where security and safety aspect becomes very very important because the models hallucinate you can inject toxicity into the model so those are the challenges what we need to address as far as what we use so i think it is very very important to build our models and if you look at the models all the models were built using the public data which was the text voice and video data so but however the enterprises the government has the best data sets so why can’t we use those data sets so the third impediment what we have today is the data gap and data gap is essentially i need to have high quality accessible and manageable data and we can build gpts using that what we can call it as a machine gpt what we can build using that use that for inferencing use that for training use that for inferencing and we get a lot of quality use of ai without data the which is the fuel for the ai today you can’t really move forward on the ai and i think these are the typical three problems and the ways we are looking at addressing this is clearly one is i will not be able to build a huge data center for a specific use case so take a use case and then see how fast i can give that infrastructure a comprehensive secure ai factory or a secure infrastructure whether it is in the data center or in the edge actually so that people can focus on building the use cases or the applications on top of it and the second thing comes on the safety and the security aspect of it and how we can do the defense mechanism and the third one is the data so these are the three problems what cisco is trying to address along with the ecosystem partners of course because this is not a problem what you can solve alone actually yeah thank you

Rizvi

yeah i think i don’t know if my mic okay it’s okay yeah and i’ll i’ll sort of take from the security point which you have spoken and i’ll come back to dr kamakoti i think we have on the clock it shows seven but on my watch it shows 15 yeah so i’ll go by my watch uh yeah so dr kamakoti would like to focus on critical infra and public systems here and as you know that as with the advent of ai we’re going to use it across these sectors as well so how important do you see heterogeneous compute in terms of contributing to national resilience to safeguard and to sort of you know ensure that our critical infrastructure public systems are secure as well

Dr. Kamakoti

So today, the type of things that we need to do for each one of these actions, the type of inferencing, type of response time we need, as Shetty mentioned, it’s going to be different. I hope all of you have seen Yes Prime Minister, and always they say, need to know, right? You need to know, right? And now what happens is if I am going to make a model that has understood the entire data, then this that the model, and it is used to be someone that someone should they need to know that data? That’s a very important question. So that’s where the entire aspect of cybersecurity comes in. And that’s why we are all saying that we have need to have sovereign models.

As he rightly pointed out, we can have adversarial AI, we can go poison the whole thing and then make it teach make it tell the things that, you know, should not be told, or need not be told. Okay. This is something that we need to very much look at from a security point where i do an inferencing and my training data set goes for a toss number one so we need to have something for for education at least as a director of one of the premium students in the country what my worry is that for education like how we have since our board for uh you know movies what we should make models for which certain details alone should be fed into it see is a bacha right whatever you teach what it will tell you back probably do a little more uh generative on that so this is number one number two is again coming back to cisco itself right you do deep packet inspection and basically you do it with some signatures today the the whole story is changing dynamically the malware can change its signature so that’s going to be the biggest challenge now and what sort of inferencing they are going to do they have to bring some more different architecture and that will be a heterogeneous architecture now and so so So, ultimately, you know, as you see, you know, what you see, the trust component, I always repeat this, I’ll finish with this with my one minute.

So, trust is, you know, friends, you know, if you want to define A is equivalent to B, that’s the definition, right? If you want to define A, you have to come with B, which is equivalent to A. So, equivalence in discrete mathematics, equivalence relation should satisfy three properties, reflexive, symmetric, transitive. A is trust is not reflexive, I don’t trust myself sometimes. Trust is not symmetric, I trust Sarah, Sarah may not trust me. Trust is not transitive, I trust Gokul, Gokul trust you, I may not trust you. Trust is in addition, trust is context dependent, I trust. I trust you on something, I don’t trust you on something else. It is temporal, morning I trust you, evening I don’t trust you.

So, right? So, the main thing is, we have to build that mathematics. defined trusted and if you go to you know some of these search engine and define trust you get 1 million hits for that so so that is going to be the most important part so specifically on heterogeneous we will have certain different types of security issues something which a can sound something which is originating because of a and that’s where all of us edge connectivity server all the three people have to work together and and we will teach and he’ll put policy so

Rizvi

but both of you are equally playing an important role in terms of policy dr. Kamakoti you’re also you know very influential and important figure in India’s AI policies of course lots to learn from you Gokul very quickly would like to come to you and you know just sort of taking away in terms of the practical deployment models and what are the sort of examples you’ve seen which demonstrate that we are moving towards heterogeneous compute right and what needs to be done to also get get to that

Gokul

So I started off with workload and I’ll go back to the same thing. So one of the things that we’re looking at and it’s critical is to see what vertical really needs what kind of domain specific models. And then try to apply that as much as possible as edge inferencing and contain the walls that are there that prevents AI to work efficiently. Primarily it’s like memory, you know, the connectivity, the IO, the thermal and then the power. So from an edge inferencing standpoint, there are quite a few things that are being done, be it an education segment where you want more translation, data being available, transcription. So that the knowledge is being imparted in a way that you have with the right data with the lowest power that’s meaningful for the student.

And more importantly, when we talk security, it’s not only about protecting data. the models we keep talking data and models it’s protecting the user that’s even more fundamental and how you can ensure that that happens second thing is applying it to other verticals be it small and medium business i think there is a great opportunity there where edge inferencing and putting compute with the right kind of power that can translate the businesses into actually using ai more effectively the last aspect that i want to also touch upon is in terms of just power you know as we go from one gig to nine to ten gig in the next five years in the country we have to realize that india is challenged by three physical things that we cannot run away from land water and power and these are very important aspects that it will drive how we set up our infrastructure and you know almost you know in a hundred percent of your power energy that comes into a data center forty percent goes into cooling forty percent into your computer and twenty percent on connectivity and there is this famous metric that you use, the PUE, the power usage efficiency.

It has to be as close to one as possible. All the power that you give goes to the most important thing, which is the compute, not to the cooling and things. And there are a lot of technologies that are being played with with respect to how much you can air cool on a rack, per rack, and that was okay up to about 25 kilowatt, and as you start to get to 100, you have to use liquid cooling, and then how we can set that infrastructure up. And for a country like India, it’s absolutely important to look at what hybrid energy solutions we can go with, because just pure renewable may not be able to address it. You’ll have to have something that is stable and be able to do something off-grid so that there is that dependency for you to get the data from the data centers and push as much as possible to edge, because edge is all about reach.

How can I take it to places across the country where there is no access to connectivity? It’s about how can I leapfrog? How can I leapfrog with verticals that have not used technology as much? We’ve always done a leapfrogging in India, and this is a great moment for us, and total cost of ownership. Those are the big areas.

Rizvi

Thank you, Gokul. And I think as we are approaching the end of the panel, I’d sort of like to go to Durga and Dr. Shetty also in terms of closing remarks and the way forward. So to both of you, I’ll pose this question in terms of the next two to four years, because I think the AI age, we don’t think too far ahead. We can’t do five-year planning or 10-year planning. I think two-year planning is sufficient. So what enterprise outcomes are you both looking at? Maybe we can start with Durga in terms of defining India’s access to compute, access to infrastructure, capacity, and also sort of building in scale, cost efficiency and energy efficiency.

Durga

So I’ll keep it brief. I think what I’m looking forward to with all the conversations here and in other parts of the world as well, where the problems are somewhat similar, is the ability to distribute compute across the entire network. So think of a combination of inference that runs in devices to the largest… extent that’s possible. Edge cloud, on-prem servers, where a lot of the localized processing can be done. And these can be done in air-cooled carts, by the way. The point that was made earlier is absolutely relevant. You don’t necessarily need liquid cooling all the time. You can do air-cooled carts and then just use air-cooled servers and running up to 100 to 300 billion parameter models, which are getting pretty sophisticated.

That’s the edge cloud. And as you go deeper from there onwards, then you have the data centers. It then mitigates the overall requirements of what you need in a data center. And instead of, therefore, concentrating the entire compute in one single location and then building it for just that alone, a holistic approach of devices, edge cloud, plus data center is probably what we are looking forward to. From Qualcomm, we call it as hybrid AI. It’s not just a marketing slogan, but it is something that we truly believe in. Thank you.

Arun Shetty

Since the infrastructure part has been addressed here, so let me talk. A little bit more on safety and security aspects. So I think one of the things what we need to understand about the modern… these models are very intricate and very complex. And it’s also non-deterministic because if you give an input, not necessarily the output will be the same like a standard application, correct? So that’s why it is non-deterministic. So what one should be doing, right? There are two aspects of safety and security. I’ll just touch upon why it is important to know that actually. Safety is all about, we want the models to work in a certain way but it is not working in that certain way or the way we want them to work.

That is the first part of it. That’s where the toxicity part, hallucination, all those challenges come actually. The second part of it is the security part wherein a bad actor from outside can change the behavior of the model. So we need to be careful about both the things actually. So what one should be doing? Say for example, I think Kamakoti sir also told about users to have, that’s it. users also to be secure, right? So it is essential that the organizations or the country has to build that actually. So which means if I’m accessing a chat GPT and sending some confidential info, the system should stop me. So that is the when I’m accessing a third party application, the system should be smart enough to stop me saying that you can’t be sharing that information that’s not allowed for you to share that.

So that’s something which is already happening in organizations today. The second part of it is the first party application, I’m building an application, and I’m using a model. So now the organization should be able to scan what all my AI assets are. Because one of the biggest challenges for enterprise is the shadow AI applications, they don’t know what people are doing actually. So I need to clearly know what all my assets are. That is number one, I detect all my assets or discover all my assets. And next is I should scan. and also ensure that these models and the applications what I’m using are not vulnerable. If it is vulnerable, then I need to put guardrails around it or I need to fix those problems.

And similarly, there are organizations who are already telling that there are a lot of risks. So you need to nist Mitre and OWASP are telling that there are a lot of risks associated with that and we need to ensure that we need to stop that. So that is something what Cisco is focus, our focus to see how we can use AI to defend the, to defend against all these malice and also the vulnerabilities what we see. Thank you so much.

Rizvi

I think with this, we'll probably close the panel, but I'd like to invite the Honorable Minister once again for his very quick closing remarks, which will leave us highly motivated to build on this. You've heard us in the last one hour. What are your thoughts? We'd love to hear from you in your closing address.

Honorable Minister

Thank you, Rizvi. In fact, it is a great pleasure to be here with the eminent Padma Shri awardee Professor Kamakoti, and with Gokul, Durga Prasad and Mr. Vichetti, sharing their truly professional experience and how, as policymakers, we should view these things, especially in terms of power, electricity, water and land. We should be well equipped to provide all of these wherever the eminent panelists here would be thinking of putting them. The primary challenge they have posed before me is to provide all these things; we are here to provide the rest. And thanks once again for a very apt introduction and a very apt dialogue over here.

Ultimately, all of us, me as a policymaker and all of you as technocrats and innovators, have to keep in mind that the basic agenda for this AI Impact Summit is welfare for all, happiness for all. Thank you for inviting me. Thank you so much.

Rizvi

With this, we will have to close the panel. I'd like to thank all our panelists and invite our colleague Sarah from Intel to hand over the gifts. But first, we'll just have a group photo. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (27)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“Durga stressed that voice‑first interfaces are strategically vital for India’s multilingual population and that voice is the most natural way for users to interact with devices, requiring support for native languages.”

The knowledge base notes that Durga emphasizes voice as the most natural interaction method and the importance of supporting native languages for India’s linguistic diversity [S2].

Additional Context (medium)

“Durga highlighted the need for heterogeneous compute capable of running a 10‑billion‑parameter multimodal model on a smartphone and a sub‑billion‑parameter model on smart glasses, with fallback to edge‑cloud or data‑centre resources.”

While the knowledge base discusses the broader goal of distributing inference across devices and the network, it does not specify model sizes; it adds context about the overall strategy of edge-to-cloud compute distribution [S11].

Confirmed (medium)

“Rizvi mentioned sovereign LLMs such as Sarvam being developed in India to secure the application layer.”

Sarvam is referenced as an Indian sovereign model that aims to demonstrate world-class capabilities built locally [S93].

Confirmed (high)

“Arun Shetty identified power, compute and networking as the three core impediments to AI adoption in India and warned that national AI demand could reach 63 GW within a few years.”

A Cisco executive is quoted in the knowledge base as stating that there is insufficient power, compute and network bandwidth globally, aligning with the three impediments described [S65].

Additional Context (medium)

“Shetty advocated “fit‑for‑purpose” solutions that combine on‑device, edge‑cloud and on‑premise resources for AI inference.”

The knowledge base highlights a vision of distributing compute across the entire network, supporting a hybrid on-device and edge approach, which adds nuance to the fit-for-purpose architecture concept [S11].

External Sources (102)
S1
Subrata K. Mitra Jivanta Schottli Markus Pauli — In this chapter, we look at major foreign policy events during Indira Gandhi’s terms as prime minister (1966-…
S2
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — – Durga Malladi- Gokul Subramaniam
S3
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — – Praveer Kochhar- Durga Maladi – Ritukar Vijay- Durga Maladi
S4
Building the Workforce_ AI for Viksit Bharat 2047 — -Dr. Jitendra Singh- Role/Title: Honorable Minister, Minister of State for Personnel, Minister of State for Personal Gri…
S5
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -Ashwini Vaishnaw- Role/Title: Honorable Minister (appears to be instrumental in India’s semiconductor industry developm…
S6
Announcement of New Delhi Frontier AI Commitments — -Shri Ashwini Vaishnaw: Role/Title: Honorable Minister for Electronics and Information Technology, Area of expertise: El…
S7
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — – Arun Shetty- Gokul Subramaniam – Gokul Subramaniam- Arun Shetty
S8
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — – Arun Shetty- Gokul Subramaniam – Durga Malladi- Arun Shetty – Gokul Subramaniam- Arun Shetty
S9
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — – Gokul Subramaniam- Arun Shetty- Prof. V. Kamakoti – Prof. V. Kamakoti- Arun Shetty
S10
S11
https://dig.watch/event/india-ai-impact-summit-2026/heterogeneous-compute-for-democratizing-access-to-ai — So I’ll keep it brief. I think what I’m looking forward to with all the conversations here and in other parts of the wor…
S12
Designing Indias Digital Future AI at the Core 6G at the Edge — I am part of the 6G use case group, work very closely with Shokji and I think many things are already in place. We draft…
S13
Inclusive AI_ Why Linguistic Diversity Matters — “So this is our prototype open AI inference device”[44]. “The hope is that anyone could feel empowered to connect up to …
S14
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — Good evening, distinguished guests. Welcome to the session on powering AI. As AI scales at speed, so does its infrastruc…
S15
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — Sharma identifies compute resources and research talent as the main barriers, suggesting regulatory issues are less sign…
S16
Agents of Change AI for Government Services & Climate Resilience — “…they can hallucinate it can have bias, it can have toxicity, avoid all of that and they are unpredictable ultimately…
S17
Press Conference: Closing the AI Access Gap — Access to data in private sector can be useful to public sector researchers and social entrepreneurs The governance, al…
S18
National Strategy for Artificial Intelligence — Data is vital to AI. Today vast datasets are generated from many different sources. AI and machine learning can use this…
S19
Leveraging AI4All_ Pathways to Inclusion — -Multi-layered Access Challenges in AI Implementation: The discussion emphasized that good technology alone doesn’t auto…
S20
Information technology – Cloud computing – Edge computing landscape — All rights reserved. Unless otherwise specified, or required in the context of its implementation, no part of this publi…
S21
https://dig.watch/event/india-ai-impact-summit-2026/waves-of-infrastructure-open-systems-open-source-open-cloud — The same thing will apply. There will be open models and closed models. But the way I like to think about it is models i…
S22
From KW to GW Scaling the Infrastructure of the Global AI Economy — PUE optimization should focus on thermal and load cycle management rather than simple temperature adjustments that may i…
S23
Global Internet Governance Academic Network Annual Symposium | Part 1 | IGF 2023 Day 0 Event #112 — Kazim Rizvi:I hope I’m audible. Thank you to the chair, thank you to GIGANET and IGF for hosting us today in Kyoto on a …
S24
Democratizing AI Building Trustworthy Systems for Everyone — The historical perspective on technology diffusion offers both hope and urgency: success requires deliberate action acro…
S25
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Poort raised concerns about potential political pushback: “Strong pushback against AI regulation may affect AI policy re…
S26
Empowering People with Digital Public Infrastructure — There’s a need to address power and infrastructure requirements in developing regions to enable DPI implementation. This…
S27
WS #305 Financing Self Sustaining Community Connectivity Solutions — Power management and renewable energy access are critical infrastructure needs that must be addressed alongside connecti…
S28
WS #204 Closing Digital Divides by Universal Access Acceptance — Speakers demonstrated strong consensus on fundamental principles: infrastructure alone is insufficient, language barrier…
S29
WS #102 Harmonising approaches for data free flow with trust — Dave Pendle: Yeah, thanks, Saman. Thanks for having me and good morning to everyone. My name is Dave Pendle. I’m an …
S30
WS #193 Cybersecurity Odyssey Securing Digital Sovereignty Trust — High level of consensus on core principles with strong implications for cybersecurity governance. The agreement suggests…
S31
Main Session on Sustainability & Environment | IGF 2023 — David Souter:Okay, so one of the problems here is that for every individual actor in an environmental context, every ind…
S32
Presentation of outcomes to the plenary — This aligns with SDGs 13 and 14, which call for climate action and the conservation of marine life. Overall, the compreh…
S33
Creating Eco-friendly Policy System for Emerging Technology — Decision making should be based on evidence. The analysis robustly endorses the adoption of a comprehensive, multifacet…
S34
Chapter 1 General Provisions — 1. In pursuit of sustainable development and taking into account its obligations under those international agreements co…
S35
Bridging the Digital Divide: Inclusive ICT Policies for Sustainable Development — The discussion maintained a formal, academic tone throughout, characteristic of a research presentation or conference se…
S36
Building Indias Digital and Industrial Future with AI — The discussion maintained a collaborative and forward-looking tone throughout, with industry experts, regulators, and po…
S37
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Arun Shetty made a crucial distinction between safety and security concerns in AI systems. Safety issues involve models …
S38
Building the Next Wave of AI_ Responsible Frameworks & Standards — Bhattacharya explained that trust ranks first among Salesforce’s five core values—trust, customer success, innovation, e…
S39
UNSC meeting: Artificial intelligence, peace and security — Gabon:Thank you, Madam President. I thank the United Kingdom for organizing this debate on artificial intelligence at a …
S40
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — Despite technical and economic opportunities, significant policy challenges remain. Chandra identified lack of coordinat…
S41
AI as critical infrastructure for continuity in public services — The discussion revealed relatively low levels of direct disagreement, with most speakers focusing on different aspects o…
S42
WS #344 Multistakeholder Perspectives WSis+20 the Technical Layer — – **Community-Driven Solutions**: Technical governance challenges are best addressed through community processes rather …
S43
Open Forum #18 Digital Cooperation for Development Ungis in Action — Low level of disagreement with high collaborative spirit. The main differences are tactical rather than strategic, focus…
S44
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — A significant gap remains between high-level policy requirements and practical technical implementation. Whilst basic IT…
S45
From principles to practice: Governing advanced AI in action — Juha Heikkila: Thank you. Thank you very much. It’s indeed a great pleasure to be here and to be a member of this panel….
S46
The AI Governance Alliance of the World Economic Forum unveiled the Presidio AI Framework — The AI Governance Alliance of the World Economic Forum (WEF) unveiled the ‘Presidio AI Framework’ as part of its AI Gove…
S47
How nonprofits are using AI-based innovations to scale their impact — The practical implementation of these principles included the development of guardrails and evaluation frameworks. Parti…
S48
Agentic AI in Focus Opportunities Risks and Governance — A grassroots, industry‑driven approach is needed to create practical standards and guardrails rather than top‑down manda…
S49
Main Session on Sustainability & Environment | IGF 2023 — David Souter:To firstly reinforce Chris’s point about the NRIs, the UK IGF had a major focus a couple of years ago on en…
S50
Building Climate-Resilient Systems with AI — The discussion shows remarkably high consensus among speakers, with only minor disagreements on implementation approache…
S51
World Economic Forum 2025 at Davos — Overall, the discussions at the WEF 2025 Davos highlighted a prevailing sense of optimism, especially in areas of techno…
S52
Opening Ceremony — These key comments shaped the discussion by establishing a tension between technological optimism and critical analysis …
S53
Building Indias Digital and Industrial Future with AI — Deepak Maheshwari from the Centre for Social and Economic Progress provided historical context, tracing India’s digital …
S54
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — “What raw material is needed for AI?”[9]. “sovereign AI comes to India, we’ll have the control”[56]. “Indian government …
S55
Climate change and Technology implementation | IGF 2023 WS #570 — Furthermore, the worldwide Cloud infrastructure for apps adds to the energy demands. Cloud servers, responsible for host…
S56
Policies and platforms in support of learning: towards more coherence, coordination and convergence — 35. The first and most meaningful observation that should be highlighted is that, despite general agreement on the princ…
S57
Shaping the Future AI Strategies for Jobs and Economic Development — “The thing that I disagree with Mr. Khosla about is that, again, I mentioned the energy blind spot and the cooling blind…
S58
Empowering People with Digital Public Infrastructure — 1. Power and energy requirements: Rodrigo Liang emphasized that the primary challenge in scaling artificial intelligence…
S59
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S60
Global AI Policy Framework: International Cooperation and Historical Perspectives — Werner identifies three critical barriers that prevent AI for good use cases from scaling globally. He emphasizes that d…
S61
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Artificial intelligence | Information and communication technologies for development Durga argues that AI applications …
S62
The fundamentals of AI — AI is no longer a concept confined to research laboratories or science fiction novels. From smartphones that recognise f…
S63
Democratizing AI Building Trustworthy Systems for Everyone — What’s the energy consumption? Are there simpler, lower parameter, lower energy consuming models rather than the giant m…
S64
WS #305 Financing Self Sustaining Community Connectivity Solutions — Power management and renewable energy access are critical infrastructure needs that must be addressed alongside connecti…
S65
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — Infrastructure constraint: insufficient power, compute, network bandwidth, memory, and data center capacity globally – I…
S66
Empowering People with Digital Public Infrastructure — There’s a need to address power and infrastructure requirements in developing regions to enable DPI implementation. This…
S67
WSIS Forum 2017: Summary of Day 5 — Issues oftrust, safety, and securitycan become barriers to achieving sustainable development through ICTs. Violations of…
S68
WS #204 Closing Digital Divides by Universal Access Acceptance — Speakers demonstrated strong consensus on fundamental principles: infrastructure alone is insufficient, language barrier…
S69
https://dig.watch/event/india-ai-impact-summit-2026/agentic-ai-in-focus-opportunities-risks-and-governance — So understanding that risk picture is going to be critically important. And last, I think that really pivots into one of…
S70
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-kiran-mazumdar-shaw — Deep science requires a lot of research and development. It requires patient capital. But the societal and economic retu…
S71
The Geoeconomics of Energy and Materials/ DAVOS 2025 — Challenges and Opportunities in the Energy Transition She mentions that when faced with the choice between reducing car…
S72
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — ## Environmental Sustainability and Energy Considerations
S73
Resilient infrastructure for a sustainable world — Frisch explains that sustainability has become increasingly important at CERN, especially when designing machines for ce…
S74
Chapter 1 General Provisions — 1. In pursuit of sustainable development and taking into account its obligations under those international agreements co…
S75
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — A particularly encouraging theme throughout the discussion was the natural alignment of commercial incentives with susta…
S76
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — “It can deal with multilinguality and voice.”[51]. “There’s firstly a lot of opportunity to bridge some of these inequit…
S77
Artificial Intelligence & Emerging Tech — Jörn Erbguth:Thank you very much. So I’m EuroDIG subject matter expert for human rights and privacy and also affiliated …
S78
Opening of the session — Continued dialogue on emerging technologies needed in future permanent mechanism
S79
Collaborative Innovation Ecosystem and Digital Transformation: Accelerating the Achievement of Global Sustainable Development Goals (SDGs) — This framing influenced subsequent speakers to emphasize concrete, practical solutions rather than theoretical benefits….
S80
Indias AI Leap Policy to Practice with AIP2 — Dr. Panneerselvam Madanagopal The concept of ‘technology overshoot’ provides a framework for understanding why AI adopt…
S81
The Global Power Shift India’s Rise in AI & Semiconductors — And we bring together engineering talent, silicon design strength, and a growing ecosystem of system and infrastructure …
S82
AI and Data Driving India’s Energy Transformation for Climate Solutions — The emphasis on moving from pilots to permanent solutions reflects a broader maturation in the climate-tech space, where…
S83
https://dig.watch/event/india-ai-impact-summit-2026/heterogeneous-compute-for-democratizing-access-to-ai — That’s the edge cloud. And as you go deeper from there onwards, then you have the data centers. It then mitigates the ov…
S84
Bridging the AI innovation gap — The tone is consistently inspirational and collaborative throughout. The speaker maintains an optimistic, forward-lookin…
S85
AI for equality: Bridging the innovation gap — The conversation maintained a consistently optimistic yet realistic tone throughout. Both speakers demonstrated enthusia…
S86
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S87
Indias Roadmap to an AGI-Enabled Future — The discussion maintained an optimistic and ambitious tone throughout, with speakers expressing confidence in India’s ab…
S88
Open Forum #36 Challenges & Opportunities for a Multilingual Internet — – Pradeep Kumar Verma: Scientist D, Government of India’s Ministry of Electronics and IT 1. India: Pradeep Kumar Verma …
S89
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Amandeep Singh Gil: Thank you. And thank you to you and to Abhishek for getting us together. I think there’s strong mome…
S90
AI Development Beyond Scaling: Panel Discussion Report — Eric Xing described his work building foundation models from scratch and outlined different levels of intelligence: text…
S91
Conversational AI in low income & resource settings | IGF 2023 — Ashish Atreja:Happy to. I think one of the critical things is I think it’s the onus is on us. There is a very famous map…
S92
Cooperation for a Green Digital Future | IGF 2023 — Furthermore, a human-centric approach to the digital transition is vital, as it ensures that the adoption of digital tec…
S93
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — And I want this. The most important thing that I want people to understand is… just because, and I think that the, you…
S94
Overcoming the Global Digital Divide? The South-Based RIRs | IGF 2023 — Overall, the analysis highlights the importance of financial support, collaboration, inclusion, and adaptation in streng…
S95
Signature Panel: Building Cyber Resilience for Sustainable Development by Bridging the Global Capacity Gap — It focuses on ensuring the nation’s critical infrastructure is protected by a qualified cadre of cybersecurity specialis…
S96
What is it about AI that we need to regulate? — Beyond physical infrastructure, the discussions highlighted the need for comprehensive ecosystem development. InWS #231,…
S97
Omnipresent Smart Wireless: Deploying Future Networks at Scale — Bocar A. BA.:I think it’s a great opportunity, and everybody in the room worldwide can attest that we have lived the pan…
S98
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — But then this technology, the compute networks, as well as the AI platform stack, comes together in edge devices. Robots…
S99
Main Session | Policy Network on Artificial Intelligence — Muta Asguni: Thank you so much, Serena. Really happy to be here with you guys on this session at IGF. I think there i…
S100
Trusted Connections_ Ethical AI in Telecom & 6G Networks — The panel discussion featured distinguished experts from major telecommunications equipment manufacturers, each offering…
S101
The Innovation Beneath AI: The US-India Partnership powering the AI Era — This comment fundamentally altered the discussion’s trajectory, causing multiple panelists to reconsider their assumptio…
S102
AI for Good Technology That Empowers People — But with AGI, we don’t have to worry about that. Apart from that, I do want to touch on one thing. That is Qualcomm, one…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
D
Durga
2 arguments · 197 words per minute · 538 words · 163 seconds
Argument 1
Device‑centric inference & hybrid AI (Durga)
EXPLANATION
Durga emphasizes that AI inference should run directly on end-user devices whenever connectivity permits, reducing reliance on distant data centres. He frames this as a hybrid AI approach that combines on-device, edge-cloud, and centralised compute to deliver seamless user experiences.
EVIDENCE
Durga notes that to keep AI experiences invariant to network quality, inference must be possible on devices, citing the ability to run a 10-billion-parameter multimodal model on a smartphone and a sub-1-billion-parameter model on glasses without continuous charging [12-13]. He later expands the vision, describing a distribution of compute across devices, edge-cloud, on-prem servers, and data centres, using air-cooled racks and supporting large models up to 300 billion parameters, which he calls “hybrid AI” [90-103].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for inference on end-user devices and a hybrid AI fabric is highlighted in discussions about distributing compute across the network and running models locally on offline devices [S2][S13][S12].
MAJOR DISCUSSION POINT
Device‑centric inference
DISAGREED WITH
Gokul
Argument 2
Distributed compute across devices, edge cloud and data centers (Durga)
EXPLANATION
Durga proposes a holistic architecture where compute is spread from the edge (on‑device inference) through edge‑cloud resources to traditional data centres, avoiding concentration of all processing in a single location. This approach aims to improve latency, resilience, and cost‑efficiency.
EVIDENCE
He outlines the layered model: inference on devices, edge-cloud and on-prem servers for localized processing, followed by larger data-centre resources, arguing that this reduces overall data-centre demand and enables flexible scaling [90-100]. He also mentions the use of air-cooled racks for edge deployments, reinforcing the practicality of the approach [93-96].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multiple sources describe a layered architecture that moves compute from edge devices to edge-cloud and finally to data centres, reducing concentration of resources [S2][S11][S12].
MAJOR DISCUSSION POINT
Distributed compute architecture
AGREED WITH
Arun Shetty, Gokul, Dr. Kamakoti, Rizvi
A
Arun Shetty
5 arguments · 179 words per minute · 1219 words · 407 seconds
Argument 1
Fit‑for‑purpose edge solutions to reduce data‑center load (Arun Shetty)
EXPLANATION
Arun stresses that edge solutions must be tailored to specific use‑cases to avoid over‑provisioning data‑centre capacity. By deploying appropriate compute at the edge, the burden on centralised infrastructure can be alleviated.
EVIDENCE
He describes the three impediments to AI adoption (power, compute, and networking) and argues that “fit-for-purpose solutions” are needed, especially as more inference moves to the edge in the coming years [43-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Edge solutions tailored to specific use-cases are advocated as a way to offload work from central data centres [S2][S12].
MAJOR DISCUSSION POINT
Fit‑for‑purpose edge solutions
AGREED WITH
Durga, Gokul, Dr. Kamakoti, Rizvi
Argument 2
Power as a primary impediment; demand for sustainable compute (Arun Shetty)
EXPLANATION
Arun identifies power availability as a critical barrier to scaling AI, noting projected national demand of 63 GW in the near future. Sustainable compute must therefore address power constraints alongside other resources.
EVIDENCE
He explicitly states that power is a challenge, with expectations of 63 GW of power requirement in a couple of years, highlighting the magnitude of the issue [43].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Power and cooling demands of AI workloads are described as a major barrier, with projections of tens of gigawatts and emphasis on efficient, hybrid energy strategies [S14][S2][S22].
MAJOR DISCUSSION POINT
Power constraint as impediment
AGREED WITH
Gokul, Rizvi, Honorable Minister
Argument 3
Model hallucination, toxicity and need for guardrails (Arun Shetty)
EXPLANATION
Arun points out that AI models can produce harmful outputs such as hallucinations or toxic content, and therefore require safety mechanisms and guardrails to protect users and organisations.
EVIDENCE
He explains that safety concerns include hallucination, toxicity, and that security concerns involve malicious actors altering model behaviour, urging the need for protective measures and monitoring of AI assets [112-115].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Hallucination and toxicity risks are noted, together with calls for governance, auditability and guardrails to mitigate them [S2][S16].
MAJOR DISCUSSION POINT
Safety and guardrails for AI models
AGREED WITH
Dr. Kamakoti, Rizvi, Arun Shetty (later)
DISAGREED WITH
Dr. Kamakoti
Argument 4
High‑quality, accessible data from enterprises/government to fuel AI (Arun Shetty)
EXPLANATION
Arun argues that the current data gap can be closed by leveraging high‑quality, enterprise and government data sets, which are richer than publicly available data, to train more effective AI models.
EVIDENCE
He notes that most models are built on public text, voice, and video data, but enterprises and governments possess superior data sets that could be used to create “machine GPTs” for training and inference, addressing the data fuel problem [45-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of leveraging private-sector and government datasets to improve AI model performance is emphasized in policy discussions [S17][S18].
MAJOR DISCUSSION POINT
Data gap and sovereign data
AGREED WITH
Rizvi, Dr. Kamakoti
Argument 5
Three impediments to AI adoption: power, compute, networking (Arun Shetty)
EXPLANATION
Arun summarises the primary barriers to AI adoption as insufficient power, limited compute resources, and networking constraints, each requiring targeted solutions for scalable deployment.
EVIDENCE
He lists the three impediments explicitly (power, compute, and networking) and references earlier speakers who highlighted these challenges, reinforcing their centrality to AI rollout [43-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A triad of constraints (power, compute capacity and networking bandwidth) is identified as the core bottleneck for AI scaling [S2][S14].
MAJOR DISCUSSION POINT
Key AI adoption barriers
G
Gokul
2 arguments · 186 words per minute · 572 words · 183 seconds
Argument 1
Vertical‑specific edge models; memory, connectivity, thermal limits (Gokul)
EXPLANATION
Gokul stresses that different industry verticals require specialised edge models that respect constraints such as memory, connectivity, thermal budgets, and power availability. Tailoring models to these limits enables effective edge inference.
EVIDENCE
He describes the need to identify domain-specific models for each vertical and lists the primary constraints (memory, connectivity, I/O, thermal, and power) that must be managed for successful edge deployment [68-70].
MAJOR DISCUSSION POINT
Vertical‑specific edge constraints
AGREED WITH
Durga, Arun Shetty, Dr. Kamakoti, Rizvi
Argument 2
Power, cooling, PUE and hybrid energy strategies for scalable AI (Gokul)
EXPLANATION
Gokul highlights that AI scalability hinges on efficient power usage, effective cooling, and hybrid energy solutions, emphasizing metrics like Power Usage Effectiveness (PUE) and the shift from air to liquid cooling at higher power densities.
EVIDENCE
He explains that a typical data centre allocates 40 % of power to cooling and stresses the importance of keeping PUE close to one, discusses air-cooling limits (≈25 kW per rack) and the need for liquid cooling beyond 100 kW, and advocates hybrid renewable-plus-stable energy sources for Indian contexts [71-77].
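The cooling figure quoted here implies a rough Power Usage Effectiveness value. As a sketch, assuming cooling is the only non-IT load:

```latex
\mathrm{PUE} = \frac{P_{\text{total facility}}}{P_{\text{IT equipment}}}
```

If cooling consumes 40% of facility power, IT equipment draws the remaining 60%, giving PUE = 1/0.6 ≈ 1.67. Driving PUE toward the ideal value of 1 therefore means shrinking that cooling share, which is why the air-versus-liquid cooling thresholds matter.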
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Energy consumption, cooling overhead (≈40 % of power) and the need for low PUE through hybrid cooling solutions are discussed as essential for scalable AI infrastructure [S14][S2][S22].
MAJOR DISCUSSION POINT
Energy efficiency and cooling strategies
AGREED WITH
Arun Shetty, Rizvi, Honorable Minister
DISAGREED WITH
Durga
R
Rizvi
2 arguments · 183 words per minute · 839 words · 275 seconds
Argument 1
Finite energy & need for efficient AI energy management (Rizvi)
EXPLANATION
Rizvi draws attention to the environmental dimension of AI, noting that energy is a finite resource and that AI workloads must be managed efficiently to minimise waste and carbon impact.
EVIDENCE
He remarks that there is a strong, often overlooked environmental aspect to AI, emphasizing that energy is finite and must be efficiently managed, especially as inference workloads grow in India [14-15].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Energy is framed as a finite resource requiring careful management for AI workloads, with emphasis on efficiency and sustainability [S2][S14][S22].
MAJOR DISCUSSION POINT
Energy finiteness and efficiency
AGREED WITH
Arun Shetty, Dr. Kamakoti, Arun Shetty (later)
Argument 2
Building sovereign large language models for India’s AI ecosystem (Rizvi)
EXPLANATION
Rizvi points out India’s initiative to develop home‑grown large language models, positioning them as sovereign assets that reduce reliance on foreign AI services and support local application layers.
EVIDENCE
He references India’s 300 Gen-AI startups building on large language models, notes that India leads in application-layer development, and mentions projects like Sarvam that aim to create sovereign LLMs, aligning with the minister’s vision [18-20].
MAJOR DISCUSSION POINT
Sovereign LLM development
AGREED WITH
Arun Shetty, Dr. Kamakoti
Dr. Kamakoti
1 argument · 170 words per minute · 611 words · 215 seconds
Argument 1
Sovereign models, adversarial attacks and a formal trust framework (Dr. Kamakoti)
EXPLANATION
Dr. Kamakoti argues that to safeguard AI, India must develop sovereign models resistant to adversarial manipulation and establish a formal mathematical framework for trust, recognizing trust’s non‑reflexive, non‑symmetric, and context‑dependent nature.
EVIDENCE
He stresses the need for sovereign models to prevent adversarial poisoning, cites the risk of compromised training data, and contrasts trust with equivalence relations, noting that trust lacks reflexivity, symmetry, and transitivity and therefore requires a new formalism [52-54][55-63].
MAJOR DISCUSSION POINT
Trust and sovereignty in AI models
AGREED WITH
Arun Shetty, Rizvi
DISAGREED WITH
Arun Shetty
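Dr. Kamakoti's observation that trust lacks the defining properties of an equivalence relation can be made concrete with a toy relation of ordered pairs. This is an illustrative sketch added here, not his formalism; the agents and trust pairs are invented for the example:

```python
def is_reflexive(rel, domain):
    """Every element relates to itself: (x, x) in rel for all x."""
    return all((x, x) in rel for x in domain)

def is_symmetric(rel):
    """(a, b) in rel implies (b, a) in rel."""
    return all((b, a) in rel for (a, b) in rel)

def is_transitive(rel):
    """(a, b) and (b, c) in rel imply (a, c) in rel."""
    return all((a, c) in rel
               for (a, b) in rel for (b2, c) in rel if b == b2)

# Toy trust relation: (a, b) means "a trusts b".
agents = {"user", "device", "cloud"}
trust = {("user", "device"), ("device", "cloud")}

print(is_reflexive(trust, agents))  # False: a system need not trust itself
print(is_symmetric(trust))          # False: the cloud need not trust the device back
print(is_transitive(trust))         # False: user trusts device, device trusts cloud,
                                    #        yet the user does not trust the cloud
```

All three checks fail for this toy relation, which is exactly why trust cannot be modelled as an equivalence relation and, on Dr. Kamakoti's account, needs its own context-dependent mathematical treatment.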
Honorable Minister
1 argument · 141 words per minute · 166 words · 70 seconds
Argument 1
Government role in providing power, water, land and ensuring AI welfare (Honorable Minister)
EXPLANATION
The Minister underscores the state’s responsibility to supply essential resources—power, water, and land—to support AI infrastructure, while framing AI development as a means to achieve welfare and happiness for all citizens.
EVIDENCE
He states that policymakers must ensure provision of power, electricity, water, and land for AI initiatives, linking these provisions to the broader agenda of AI-driven welfare and happiness for the population [139-145].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Policy remarks stress state provision of power, water and land to support AI infrastructure, echoing broader calls for resource allocation and sustainable energy policies [S2][S11][S14].
MAJOR DISCUSSION POINT
Policy support for AI infrastructure
AGREED WITH
Arun Shetty, Gokul, Rizvi
Agreements
Agreement Points
Distributed and heterogeneous compute across devices, edge cloud and data centres (Hybrid AI) is essential for scalable AI services.
Speakers: Durga, Arun Shetty, Gokul, Dr. Kamakoti, Rizvi
Device–centric inference & hybrid AI (Durga) Distributed compute across devices, edge cloud and data centers (Durga) Fit‑for‑purpose edge solutions to reduce data‑center load (Arun Shetty) Vertical‑specific edge models; memory, connectivity, thermal limits (Gokul) Sovereign models, adversarial attacks and a formal trust framework (Dr. Kamakoti) Finite energy & need for efficient AI energy management (Rizvi)
All speakers stress that AI workloads should be spread across a hierarchy of compute resources – from on-device inference to edge-cloud resources and finally central data-centres – to improve latency, resilience and reduce the load on large data-centres. Durga cites on-device inference of 10 billion-parameter models and a layered compute fabric [12-13][90-100]; Arun calls for fit-for-purpose edge deployments to off-load data-centres [43-44]; Gokul highlights vertical-specific edge models and the need for heterogeneous architectures [68-70][46]; Dr. Kamakoti notes that heterogeneous architectures are required for security and trust [46][55-63]; Rizvi asks about heterogeneous compute for national resilience [45].
POLICY CONTEXT (KNOWLEDGE BASE)
This view echoes concerns raised about limited access to computing resources and the need for distributed infrastructure to overcome scaling barriers, as highlighted in the IGF discussion on inclusive AI and the Global AI Policy Framework which identify infrastructure gaps as a key obstacle [S59][S60].
Power and energy availability are the primary bottlenecks for AI scaling and must be addressed through efficient usage and policy support.
Speakers: Arun Shetty, Gokul, Rizvi, Honorable Minister
Power as a primary impediment; demand for sustainable compute (Arun Shetty) Power, cooling, PUE and hybrid energy strategies for scalable AI (Gokul) Finite energy & need for efficient AI energy management (Rizvi) Government role in providing power, water, land and ensuring AI welfare (Honorable Minister)
All four speakers identify power as a critical constraint. Arun quantifies a future national demand of 63 GW [43]; Gokul discusses power usage efficiency, cooling limits and hybrid energy solutions [71-77]; Rizvi reminds that energy is finite and must be managed efficiently [14-15]; the Minister stresses that the state must ensure provision of power (and other resources) for AI infrastructure [139-145].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple policy forums have identified power scarcity as the main limiter for AI growth, including the AI Impact Summit India noting coordination challenges for data-centre power supply [S40], the Digital Public Infrastructure briefing stressing power as the primary challenge [S58], and analyses of energy and cooling blind spots in AI strategy reports [S55][S57].
AI systems must incorporate robust security, safety and trust mechanisms to prevent hallucinations, toxicity and adversarial attacks.
Speakers: Arun Shetty, Dr. Kamakoti, Rizvi, Arun Shetty (later)
Model hallucination, toxicity and need for guardrails (Arun Shetty) Sovereign models, adversarial attacks and a formal trust framework (Dr. Kamakoti) Finite energy & need for efficient AI energy management (Rizvi) Safety and security aspects (Arun Shetty)
The panel repeatedly highlights the need for safety and security. Arun points out hallucinations, toxicity and the need for guardrails [112-115]; Dr. Kamakoti stresses sovereign models and a formal trust framework to counter adversarial attacks [52-54][55-63]; Rizvi raises security as a key concern for AI deployment [45]; Arun later reiterates safety and security as essential for trustworthy AI [106-115].
POLICY CONTEXT (KNOWLEDGE BASE)
Authoritative frameworks distinguish safety (hallucinations, toxicity) from security (adversarial manipulation) and stress trust layers, as described by Arun Shetty’s safety-security distinction [S37] and Salesforce’s trust-centric responsible AI framework [S38], complemented by calls for guardrails and evaluation tools in nonprofit AI deployments [S47].
Developing sovereign, high‑quality data sets and models is essential for India’s AI independence and security.
Speakers: Arun Shetty, Rizvi, Dr. Kamakoti
High‑quality, accessible data from enterprises/government to fuel AI (Arun Shetty) Building sovereign large language models for India’s AI ecosystem (Rizvi) Sovereign models, adversarial attacks and a formal trust framework (Dr. Kamakoti)
All three agree that India should rely on its own data and models. Arun stresses leveraging enterprise and government data to close the data gap [45-48]; Rizvi notes the push for sovereign LLMs such as Sarvam [18-20]; Dr. Kamakoti argues that sovereign models are needed to avoid adversarial poisoning and to build trust [52-54].
POLICY CONTEXT (KNOWLEDGE BASE)
The push for sovereign AI in India is grounded in historical investment in digital infrastructure since the 19th-century telegraph era [S53] and recent statements emphasizing indigenous data and model development as strategic assets [S54].
Similar Viewpoints
Both mention the use of air‑cooled racks/carts for edge deployments, indicating a practical, low‑complexity cooling approach for distributed AI compute [93-96][71-76].
Speakers: Durga, Gokul
Device–centric inference & hybrid AI (Durga) Power, cooling, PUE and hybrid energy strategies for scalable AI (Gokul)
Both stress that energy is a finite resource and a major barrier to AI scaling, calling for efficient management of power consumption [43][14-15].
Speakers: Arun Shetty, Rizvi
Power as a primary impediment; demand for sustainable compute (Arun Shetty) Finite energy & need for efficient AI energy management (Rizvi)
Both advocate for sovereign, high‑quality data and models to ensure security and reduce reliance on external AI services [45-48][52-54].
Speakers: Arun Shetty, Dr. Kamakoti
High‑quality, accessible data from enterprises/government to fuel AI (Arun Shetty) Sovereign models, adversarial attacks and a formal trust framework (Dr. Kamakoti)
Both identify the need for robust security and trust frameworks to mitigate model hallucinations, toxicity and adversarial manipulation [112-115][55-63].
Speakers: Arun Shetty, Dr. Kamakoti
Model hallucination, toxicity and need for guardrails (Arun Shetty) Sovereign models, adversarial attacks and a formal trust framework (Dr. Kamakoti)
Both link energy provision to broader societal welfare, emphasizing that state support for power infrastructure underpins sustainable AI development [14-15][139-145].
Speakers: Rizvi, Honorable Minister
Finite energy & need for efficient AI energy management (Rizvi) Government role in providing power, water, land and ensuring AI welfare (Honorable Minister)
Unexpected Consensus
Alignment between the Minister’s policy focus on provision of power, water and land and the technical community’s emphasis on power as the primary AI bottleneck.
Speakers: Honorable Minister, Arun Shetty, Gokul, Rizvi
Government role in providing power, water, land and ensuring AI welfare (Honorable Minister) Power as a primary impediment; demand for sustainable compute (Arun Shetty) Power, cooling, PUE and hybrid energy strategies for scalable AI (Gokul) Finite energy & need for efficient AI energy management (Rizvi)
It is unexpected that a high-level policy speaker explicitly echoes the technical speakers’ detailed concerns about power availability and efficiency, signalling a rare convergence of policy and engineering perspectives on the same resource constraint [139-145][43][71-77][14-15].
POLICY CONTEXT (KNOWLEDGE BASE)
The convergence is reflected in policy briefs from the AI Impact Summit highlighting power, water and land as critical enablers for AI data-centres, matching technical analyses that flag power as the dominant bottleneck [S40][S58].
Overall Assessment

The panel shows strong consensus on four pillars: (1) a distributed, heterogeneous compute fabric (Hybrid AI); (2) power and energy efficiency as the dominant infrastructural constraint; (3) the necessity of security, safety and trust mechanisms; (4) the strategic importance of sovereign data and models. These agreements cut across technical, commercial and policy domains.

High consensus – the speakers from industry, academia and government repeatedly reinforce the same priorities, indicating that future AI strategies in India are likely to be coordinated around distributed compute architectures, sustainable energy provisioning, robust security frameworks and sovereign data ecosystems.

Differences
Different Viewpoints
Feasibility of extensive on‑device inference versus the reality of power and cooling constraints
Speakers: Durga, Gokul
Device‑centric inference & hybrid AI (Durga) Power, cooling, PUE and hybrid energy strategies for scalable AI (Gokul)
Durga asserts that modern smartphones can run a 10 billion-parameter multimodal model and glasses can run a sub-1 billion-parameter model with minimal charging, promoting device-centric inference as a core part of hybrid AI [12-13]. Gokul counters that AI scalability is fundamentally limited by power consumption, cooling requirements and the need for hybrid energy solutions, noting that 40 % of data-centre power goes to cooling and that air-cooling caps at ~25 kW per rack before liquid cooling is required, emphasizing the physical constraints on deploying large models at the edge [71-77].
POLICY CONTEXT (KNOWLEDGE BASE)
Experts have warned that AI strategies often overlook energy and cooling limitations, describing an “energy blind spot” and the need for free cooling solutions [S57][S55], underscoring the practical constraints on on-device inference.
Approach to securing AI models: formal trust framework versus practical guardrails and asset management
Speakers: Dr. Kamakoti, Arun Shetty
Sovereign models, adversarial attacks and a formal trust framework (Dr. Kamakoti) Model hallucination, toxicity and need for guardrails (Arun Shetty)
Dr. Kamakoti proposes building a mathematical trust framework, emphasizing that trust is non-reflexive, non-symmetric and context-dependent, and calls for sovereign models to resist adversarial attacks [55-63]. Arun Shetty focuses on operational safeguards such as detecting shadow AI assets, scanning models for vulnerabilities, and implementing guardrails to prevent toxic or hallucinated outputs, stressing practical security measures rather than a formal trust theory [112-129].
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions on AI governance stress a separation between high-level trust frameworks and community-driven guardrails, with recommendations for pragmatic asset-management practices over top-down mandates [S42][S48][S44].
Unexpected Differences
Optimism about device‑level large model inference versus realistic power‑and‑cooling limits
Speakers: Durga, Gokul
Device‑centric inference & hybrid AI (Durga) Power, cooling, PUE and hybrid energy strategies for scalable AI (Gokul)
Durga’s confident claim that a 10 billion-parameter model can run on a smartphone without frequent charging [12-13] contrasts with Gokul’s detailed exposition of the substantial power and cooling demands of AI workloads, suggesting that such on-device performance may be over-optimistic given current infrastructure limits [71-77].
POLICY CONTEXT (KNOWLEDGE BASE)
The same energy-and-cooling concerns that temper optimism about on-device large models have been documented in AI strategy critiques highlighting underestimation of power needs [S57][S55].
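Durga's claim about on-device 10-billion-parameter models can be sanity-checked with simple weight-memory arithmetic. The sketch below is an illustration added here, not part of the session, and it deliberately ignores activations, KV cache, and runtime overhead:

```python
def model_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate weight-storage footprint in GB (decimal), weights only."""
    return n_params * bits_per_param / 8 / 1e9

# A 10-billion-parameter model at common quantization levels:
for bits in (16, 8, 4):
    print(f"10B params @ {bits}-bit: {model_memory_gb(10e9, bits):.1f} GB")
# 10B params @ 16-bit: 20.0 GB
# 10B params @ 8-bit: 10.0 GB
# 10B params @ 4-bit: 5.0 GB
```

At 16-bit precision the weights alone need about 20 GB, well beyond typical smartphone RAM, while 4-bit quantization brings them to roughly 5 GB; aggressive quantization is what makes such on-device claims plausible, and it is also where the power and thermal constraints Gokul describes start to bite.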
Policy‑level welfare framing versus technical implementation focus
Speakers: Honorable Minister, Technical panelists (Durga, Arun Shetty, Gokul, Dr. Kamakoti)
Government role in providing power, water, land and ensuring AI welfare (Honorable Minister) Various technical solutions on compute distribution, power, security, data (Durga, Arun Shetty, Gokul, Dr. Kamakoti)
The Minister emphasizes AI as a means to achieve welfare and happiness, focusing on provision of resources like power, water and land [139-145], while the technical speakers concentrate on detailed architectural, security and energy-efficiency measures, showing a mismatch between high-level policy goals and concrete technical pathways.
POLICY CONTEXT (KNOWLEDGE BASE)
Multistakeholder dialogues note a tension between high-level welfare objectives and concrete technical execution, advocating community-led solutions and clear separation of policy from implementation [S42][S43][S48].
Overall Assessment

The panel largely converged on the need for distributed, edge‑centric AI architectures, sustainable power management, and sovereign data/models. Divergences emerged around the practicality of large on‑device inference versus existing power and cooling constraints, and between a formal, theoretical trust framework versus pragmatic security guardrails.

Moderate – while participants share common goals (resilient AI, energy efficiency, security), they differ on the feasibility of device‑level large models and on the preferred security paradigm. These differences suggest that policy and industry efforts must balance ambitious technical visions with realistic infrastructure planning and develop both theoretical and practical security standards.

Partial Agreements
All three agree that moving inference to the edge and tailoring solutions to specific use‑cases reduces pressure on central data centres and improves latency and resilience. Durga describes a layered architecture from device to edge‑cloud to data centre [90-100]; Arun stresses fit‑for‑purpose edge deployments to alleviate data‑centre demand [43-44]; Gokul highlights the need to respect vertical constraints (memory, connectivity, thermal, power) when deploying edge models [68-70].
Speakers: Durga, Arun Shetty, Gokul
Distributed compute across devices, edge cloud and data centers (Durga) Fit‑for‑purpose edge solutions to reduce data‑center load (Arun Shetty) Vertical‑specific edge models; memory, connectivity, thermal limits (Gokul)
All acknowledge energy as a critical factor. Rizvi flags the finiteness of energy and the need for efficient AI workloads [14-15]; Gokul details power usage, cooling overhead and hybrid energy solutions to improve efficiency [71-77]; Durga points to on‑device inference as a way to reduce reliance on energy‑intensive data centres [12-13].
Speakers: Rizvi, Gokul, Durga
Finite energy & need for efficient AI energy management (Rizvi) Power, cooling, PUE and hybrid energy strategies for scalable AI (Gokul) Device‑centric inference & hybrid AI (Durga)
All stress the importance of sovereign, high‑quality data and models for India’s AI independence. Kamakoti calls for sovereign models to avoid adversarial poisoning [52-54]; Arun highlights leveraging enterprise/government data to create ‘machine GPTs’ [45-48]; Rizvi notes India’s push to develop sovereign LLMs such as Sarvam [18-20].
Speakers: Dr. Kamakoti, Arun Shetty, Rizvi
Sovereign models, adversarial attacks and a formal trust framework (Dr. Kamakoti) High‑quality, accessible data from enterprises/government to fuel AI (Arun Shetty) Building sovereign large language models for India’s AI ecosystem (Rizvi)
Takeaways
Key takeaways
AI workloads must be distributed across heterogeneous compute resources – from on‑device inference to edge clouds and centralized data centers – to ensure performance regardless of network connectivity (Durga, Arun Shetty, Gokul).
Energy consumption is a critical bottleneck; efficient power usage, improved cooling (air‑cooled where feasible), and hybrid renewable/off‑grid energy strategies are essential for scalable AI deployment in India (Rizvi, Gokul, Arun Shetty).
Security, safety, and trust are paramount; models can hallucinate, be poisoned, or contain malicious code, requiring sovereign model development, robust guardrails, and formal trust frameworks (Arun Shetty, Dr. Kamakoti).
A lack of high‑quality, accessible data hampers AI progress; leveraging enterprise and government data sets to build sovereign large language models is necessary (Rizvi, Arun Shetty).
Three primary impediments to enterprise AI adoption were identified: power infrastructure, compute capacity, and networking limitations; solutions must be fit‑for‑purpose and involve ecosystem collaboration (Arun Shetty).
Policy makers must support AI growth by ensuring reliable power, water, and land resources, while keeping AI’s societal impact focused on welfare and inclusive benefit (Honorable Minister).
Resolutions and action items
Cisco will collaborate with ecosystem partners to create a “secure AI factory” that provides fit‑for‑purpose infrastructure for edge and data‑center inference (Arun Shetty).
Stakeholders agreed to pursue development of sovereign, domain‑specific models using high‑quality Indian data sets (Rizvi, Arun Shetty).
Implement AI‑driven security guardrails: asset discovery, vulnerability scanning, and policy enforcement to prevent data leakage and malicious model behavior (Arun Shetty).
Promote the use of air‑cooled server carts and hybrid cooling solutions to improve PUE and reduce reliance on liquid cooling where possible (Durga, Gokul).
Encourage joint efforts between government, academia, and industry to address power and water infrastructure needs for AI compute (Honorable Minister).
Unresolved issues
Specific mechanisms and timelines for establishing sovereign data repositories and the governance model for their use were not defined.
Details of how heterogeneous compute will be orchestrated across devices, edge clouds, and data centers (e.g., standards, APIs, workload placement algorithms) remain open.
Concrete policy actions, funding models, and regulatory frameworks to guarantee reliable power, water, and land allocation for AI infrastructure were not finalized.
The methodology for quantifying and enforcing the proposed trust framework for AI models was discussed but not concretized.
How to measure and certify the security and safety guardrails across diverse enterprise environments was left for future work.
Suggested compromises
Adopt air‑cooled server carts for many edge deployments instead of mandating liquid cooling, balancing performance with cost and energy efficiency.
Utilize a hybrid energy mix (renewable plus conventional/off‑grid sources) to meet the high power demand while maintaining reliability.
Combine centralized data‑center compute with edge inference to reduce overall compute concentration, allowing incremental upgrades rather than a single massive infrastructure build‑out.
Thought Provoking Comments
Do you want your AI user experience to be invariant to the quality of the communications that you have at that point in time? … That means you must have the ability to run inference directly on devices.
Highlights the strategic shift from cloud‑centric AI to edge inference, linking user experience consistency with resilience to network variability. It reframes the problem from “where to compute” to “how to guarantee service regardless of connectivity”.
Set the technical agenda for the whole panel. It prompted subsequent speakers (Arun Shetty, Dr. Kamakoti, Gokul) to discuss power, compute, and security implications of running models at the edge, and it anchored the later discussion on heterogeneous compute.
Speaker: Durga
Energy is finite and must be efficiently managed; India is building sovereign large language models and has ~300 Gen‑AI startups focusing on the application layer.
Introduces two macro‑level dimensions—environmental sustainability and national sovereignty—that broaden the conversation beyond pure technology to policy, economics, and climate impact.
Shifted the tone from a purely technical debate to a broader strategic one. It led others (Arun Shetty, Dr. Kamakoti) to reference power constraints and the need for sovereign models, and it set up the later policy‑focused remarks by the Minister.
Speaker: Rizvi
The three impediments for AI adoption are: power, compute, and networking; plus security/safety and a data gap that limits model quality.
Provides a concise, structured framework that categorises the major challenges, making it easier for the panel to address each area systematically.
Created a roadmap for the discussion. Subsequent speakers referenced each pillar—Dr. Kamakoti on security, Gokul on power and cooling, Durga on compute distribution—keeping the conversation focused and progressive.
Speaker: Arun Shetty
Trust is not reflexive, not symmetric, not transitive, and is context‑dependent and temporal; we need a mathematics of trust to define it rigorously.
Moves the dialogue from engineering constraints to a foundational, philosophical issue that underpins security and governance of AI systems. The contrast with equivalence relations is both novel and clarifying.
Prompted a deeper dive into security concerns, influencing Dr. Kamakoti’s later points about adversarial AI and shaping Arun Shetty’s emphasis on guardrails and policy frameworks. It also gave the Minister a conceptual hook for his closing remarks on welfare and trust.
Speaker: Dr. Kamakoti
India is challenged by three physical resources—land, water, and power—and must adopt hybrid energy solutions and edge‑centric architectures to leapfrog connectivity gaps.
Connects technical infrastructure to the country’s unique resource constraints, introducing the idea of hybrid (renewable + off‑grid) energy and edge deployment as a national strategy.
Redirected the conversation toward practical, large‑scale deployment scenarios in India, reinforcing the earlier points about power and cooling. It also gave the Minister concrete examples to reference in his policy‑oriented closing.
Speaker: Gokul
AI models are non‑deterministic; we need to discover all AI assets, scan them for vulnerabilities, apply guardrails (NIST, MITRE, OWASP), and use AI itself to defend against malicious manipulation.
Synthesises earlier security discussions into actionable steps, linking governance frameworks with the technical reality of model behaviour, and proposes using AI for its own protection.
Provided a concrete action plan that tied together the earlier abstract concerns (trust, security, power) and gave the audience a sense of direction. It also reinforced the panel’s consensus on the need for integrated, secure, edge‑first AI deployments.
Speaker: Arun Shetty (closing remarks)
Overall Assessment

The discussion was driven forward by a series of escalating insights that moved from a technical premise (Durga’s edge inference) to broader systemic challenges (Rizvi’s energy and sovereignty), a structured problem taxonomy (Arun Shetty’s three impediments), a foundational trust framework (Dr. Kamakoti), and finally to concrete, resource‑aware deployment strategies (Gokul) and actionable security road‑maps (Arun Shetty’s closing). Each of these pivotal comments acted as a turning point, expanding the scope, deepening the analysis, and aligning the participants toward a shared vision of resilient, secure, and sustainable AI infrastructure for India.

Follow-up Questions
How important is heterogeneous compute in contributing to national resilience and securing critical infrastructure and public systems?
Seeks insight on the role of distributed compute in protecting essential services, a key factor for national security and reliability.
Speaker: Rizvi (to Dr. Kamakoti)
Can you provide practical deployment examples that demonstrate progress towards heterogeneous compute, and what steps are needed to achieve broader adoption?
Requests concrete case studies and actionable guidance to move from theory to implementation of edge‑centric AI.
Speaker: Rizvi (to Gokul)
What enterprise outcomes should be targeted in the next two to four years regarding India’s access to compute, infrastructure capacity, scale, cost efficiency, and energy efficiency?
Aims to define short‑term goals for Indian enterprises to align investments and policies with realistic AI deployment timelines.
Speaker: Rizvi (to Durga and Arun Shetty)
Assess the environmental impact and energy management challenges of large‑scale AI inference, especially in the context of finite energy resources.
Highlights the need for research on sustainable AI practices to balance performance with carbon and energy footprints.
Speaker: Rizvi
Develop sovereign large language models for India while ensuring data security and protection against adversarial manipulation.
Calls for investigation into building domestically controlled models that mitigate geopolitical and security risks.
Speaker: Rizvi
Address infrastructure constraints—power, compute, and networking—that limit AI adoption at scale.
Identifies fundamental bottlenecks that must be quantified and alleviated for widespread AI deployment.
Speaker: Arun Shetty
Study security and safety issues of AI models, including hallucinations, toxicity, and adversarial attacks, and devise mitigation strategies.
Ensures AI systems are trustworthy and safe for enterprise and public use.
Speaker: Arun Shetty; Dr. Kamakoti
Bridge the data gap by creating high‑quality, accessible, and manageable datasets for training and inference, especially for domain‑specific applications.
Data is the fuel for AI; lacking quality data hampers model effectiveness and innovation.
Speaker: Arun Shetty
Design fit‑for‑purpose edge solutions that enable inference on devices while meeting constraints of memory, connectivity, I/O, thermal limits, and power consumption.
Targeted research needed to optimize hardware and software stacks for edge AI workloads.
Speaker: Arun Shetty; Gokul
Formulate a mathematical framework for trust in AI systems, capturing context‑dependence, temporality, and asymmetry of trust relationships.
Provides a rigorous basis for building trustworthy AI governance and verification mechanisms.
Speaker: Dr. Kamakoti
Develop heterogeneous architectures for deep packet inspection that can adapt to rapidly changing malware signatures and AI‑driven threats.
Addresses the evolving security landscape where static signatures are insufficient.
Speaker: Dr. Kamakoti
Investigate cooling technologies and power‑usage‑efficiency (PUE) optimization for data centers, comparing air‑cooled carts versus liquid cooling at various power densities.
Improving cooling efficiency directly reduces operational costs and environmental impact.
Speaker: Gokul
Explore hybrid energy solutions (combining renewable, grid, and off‑grid sources) to power data centers and edge sites reliably across India’s diverse geography.
Ensures continuous AI services despite variability in power availability, supporting edge deployment in remote areas.
Speaker: Gokul
Define policy frameworks that align AI development with resource constraints (power, water, land) and promote welfare and happiness for all citizens.
Guides national strategy to balance AI advancement with sustainable resource management.
Speaker: Honorable Minister
Create mechanisms to discover, inventory, and manage shadow AI applications within enterprises to prevent uncontrolled usage and associated risks.
Shadow AI poses security, compliance, and cost challenges that need systematic oversight.
Speaker: Arun Shetty
Implement guardrails, scanning tools, and remediation processes to detect vulnerabilities in AI models and applications before deployment.
Proactive security testing is essential to prevent exploitation of AI systems.
Speaker: Arun Shetty
Establish guidelines for educational AI models, determining which data should be fed to models to avoid misinformation while supporting learning outcomes.
Balances the benefits of AI‑enhanced education with the risk of propagating inaccurate or harmful content.
Speaker: Dr. Kamakoti

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.